
Excellence and Impact Recognized by World’s Preeminent Computing Society

Association for Computing Machinery Selects 71 Professionals
for Outstanding Achievements

NEW YORK, Jan. 21, 2026 /PRNewswire/ — ACM, the Association for Computing Machinery, has named 71 new Fellows. ACM Fellows are registered members of the society and were selected by their peers for achieving remarkable results through their technical innovations and/or service to the field. This year’s honorees hail from 14 countries and were chosen from among ACM’s global membership of more than 100,000 computing professionals.


“These men and women represent the top 1% of professionals in our association,” explained ACM President Yannis Ioannidis. “I personally enjoy reviewing the list of achievements of the new Fellows because it offers a snapshot of what’s happening in our field at the moment. This year, for example, we are honoring members working in well-established disciplines such as computer architecture and software engineering, alongside innovators in emerging disciplines like swarm intelligence or scene recognition. As we congratulate the new Fellows for their accomplishments, we hope that their work will also serve as an inspiration to the next generation. We especially encourage those who are not yet ACM members to join us and become part of a vibrant global community.”

The 2025 ACM Fellows work at leading universities, corporations, and research institutions in Australia, Belgium, Canada, China, France, Germany, Israel, Italy, Japan, Saudi Arabia, Singapore, Switzerland, the United Kingdom, and the United States. This year’s Fellows are cited for contributions in a wide range of computing research areas including AI for healthcare, computer graphics, data management, electronic mail, human-computer interaction, mobile computing, networked systems, robotics, security, sustainability, and numerous other areas.

ACM Fellows serve as ambassadors for the organization and are often called upon to offer their expertise to the media, public officials, and industry leaders. The 2025 ACM Fellows will be formally recognized during an awards banquet on June 13 in San Francisco.

2025 ACM Fellows

Eytan Adar
University of Michigan
For research in data mining, data visualization, and social computing.

Gail-Joon Ahn
Arizona State University
For contributions to the foundations and practical applications of information and systems security, including formal models and policy frameworks.

Eric Allman
NA
For the development of electronic mail.

Sven Apel
Saarland University
For theories and methods that empower humans to comprehend, construct, and optimize software systems.

Lujo Bauer
Carnegie Mellon University
For contributions to access control, usable security and privacy, and the security of machine learning.

Angela Bonifati
Université Claude Bernard Lyon 1
For contributions to the foundations of graph databases and data integration.

Rajkumar Buyya
The University of Melbourne 
For research contributions to cost and energy-efficient resource management and scheduling systems for cloud computing.

George Candea
EPFL
For contributions to dependable systems blending operating systems, formal methods, and machine virtualization.

Pei Cao
YouTube
For contributions to web caching, search engine efficiency, and information quality.

Franck Cappello
Argonne National Laboratory
For contributions to parallel/distributed computing, resilience, and scientific data reduction.

Luca P. Carloni
Columbia University
For contributions to the design of system-on-chip architectures and heterogeneous computing platforms.

Sheelagh Carpendale
Simon Fraser University 
For contributions to expanding the diversity of data comprehension through innovative interactive visualizations.

Swarat Chaudhuri
University of Texas at Austin
For the development of tools for reliable systems, via approaches connecting formal methods and machine learning.

Baoquan Chen
Peking University
For contributions to large-scale scene reconstruction, discrete geometry processing, and shape design for manufacturing.

Deming Chen
University of Illinois at Urbana-Champaign
For contributions to reconfigurable computing, including synthesis algorithms and customizable AI accelerator design methodologies.

Kwang-Ting Cheng
Hong Kong University of Science and Technology
For contributions to design automation and software-hardware co-design of electronic circuits and computing systems.

Cristina Conati
University of British Columbia
For contributions to research in Human-AI interaction and AI-driven personalization.

Marco Dorigo
Université Libre de Bruxelles
For establishing swarm intelligence as a research field.

George Drettakis
INRIA
For contributions to computer graphics, image-based rendering, and visual computing.

Nandita Dukkipati
Google
For contributions to congestion control, transport performance, and end-host network stacks.

Javier Esparza 
TU Munich
For contributions to the theory of program verification and concurrent systems.

Paolo Ferragina
Sant’Anna School of Advanced Studies, Pisa
For contributions to data structures and algorithms for efficient search and data compression.

Yun Raymond Fu
Northeastern University
For contributions to representation learning, computer vision, and face and gesture recognition.

Michael L Gleicher
University of Wisconsin–Madison
For contributions to graphics, multimedia, visualization, and robotics.

Wolfgang Heidrich
King Abdullah University of Science and Technology
For contributions to computational photography and displays, and to high dynamic range imaging and display.

Steve Hodges
Lancaster University
For contributions to interactive device and systems research resulting in widely adopted impactful products.

Zi Helen Huang
University of Queensland, Australia
For contributions to large-scale multimedia content understanding, indexing and retrieval.

Odest Chadwicke Jenkins
University of Michigan
For contributions to robot learning and broadening participation in robotics and AI.

Jiaya Jia
Hong Kong University of Science and Technology
For contributions to segmentation, scene parsing and texture analysis in computer vision.

Xiaohua Jia
City University of Hong Kong
For contributions to the advancement of data security and distributed computing systems.

Hai Jin
Huazhong University of Science and Technology
For contributions to efficient data-centric processing, memory management, and distributed system architectures.

Ken-ichi Kawarabayashi
National Institute of Informatics and The University of Tokyo
For contributions to graph theory, graph algorithms, and their applications.

Aggelos Kiayias
University of Edinburgh
For contributions to the principles and practice of cyber security and cryptography.

Tadayoshi Kohno
Georgetown University 
For leadership in cybersecurity for emerging technologies, security ethics, and sociotechnical security.

Wolfgang Lehner
Technische Universität Dresden / Aalborg University
For contributions to architectures of main-memory database management systems.

Jian Ma
Carnegie Mellon University
For contributions to computational biology algorithms and machine learning.

Ratul Mahajan
University of Washington
For contributions to network verification and network control systems and their transfer to industrial practice.

Athina Markopoulou
UC Irvine
For contributions to internet measurement and privacy enhancing technologies.

Nenad Medvidovic
University of Southern California
For contributions to the foundations of software architectures and their application to software engineering.

Tao Mei
HiDream.ai
For contributions to multimedia analysis, retrieval, and applications.

Tommaso Melodia
Northeastern University
For contributions to open radio access network architectures and AI-native wireless networks.

Dejan S Milojicic
Hewlett-Packard Labs
For contributions to software and hardware architectures of distributed systems, from high performance computing to cloud computing.

Alistair M Moffat
University of Melbourne
For contributions to the implementation and evaluation of search engines.

Mohamed F Mokbel
University of Minnesota
For contributions in building scalable spatially-aware systems.

Peter Mueller
ETH Zurich
For contributions to automated, modular program verification.

Madanlal Musuvathi
Microsoft Research
For the development of methods in concurrency verification and testing, and machine learning systems design.

Noam Nisan
Hebrew University
For contributions to complexity theory, and for pioneering the field of economics and computation.

Alessandro Orso
Georgia Institute of Technology
For contributions in developing novel and impactful techniques for software testing and debugging.

Themis Palpanas
Université Paris Cité
For contributions to time-series/data-series/vector indexing and similarity-search, anomaly detection, and entity resolution.

Denys Poshyvanyk
William & Mary
For the development of deep learning and software analytics methods that transformed software engineering research and practice.

Ariel Procaccia
Harvard University
For contributions to AI, algorithms, and society, including foundational work and practical impact.

Theodore Rappaport
New York University
For research contributions in wireless networks.

Sylvia Ratnasamy
UC Berkeley 
For contributions to networks and networked systems.

Oded Regev
New York University
For contributions to computational complexity, lattice-based cryptography, and quantum computing.

Cynthia Rudin
Duke University
For contributions to and leadership in interpretable machine learning and societal applications.

Natarajan Shankar
SRI International
For contributions in automated reasoning, mechanized metatheory, formalized mathematics, formal methods, and system assurance.

Yan Solihin
University of Central Florida
For contributions to shared cache architecture and trusted execution environment design.

Kate Starbird
University of Washington
For contributions to understanding and improving information ecosystems, including during crisis events and addressing misinformation.

Gookwon Edward Suh
Nvidia and Cornell University
For contributions to the development of secure hardware circuits and processors.

Kian-Lee Tan
National University of Singapore
For contributions to query optimization and processing for advanced database applications.

Hanghang Tong
University of Illinois Urbana-Champaign
For contributions to the theories, algorithms, and applications of large-scale graph mining.

Antonio Torralba
MIT
For contributions to computer vision, including novel datasets, scene recognition and context understanding.

Stephanie Weirich
University of Pennsylvania
For contributions to static type systems and mechanized mathematics of programming languages.

Adam Wierman
California Institute of Technology
For contributions to online algorithms, scheduling theory, and applications to sustainable computing.

Rebecca N. Wright
Barnard College
For contributions to security and privacy, and for leadership in computing research and education.

Hui Xiong
The Hong Kong University of Science and Technology (Guangzhou)
For research contributions to the advancement of AI and mobile computing.

Li Xiong
Emory University
For contributions to static type systems and mechanized mathematics of programming languages.

Junfeng Yang
Columbia University
For leadership and contributions to trustworthy software and AI systems.

Ke Yi
Hong Kong University of Science and Technology
For contributions to the theory and practice of query processing.

Yu Zheng
Jingdong Technology Inc.
For contributions to spatio-temporal data mining and urban computing.

Jun Zhu
Tsinghua University
For contributions to the theory and methods of probabilistic machine learning.

About the ACM Recognition Program
The ACM Fellows program, initiated in 1993, celebrates the exceptional contributions of the leading members in the computing field. To be selected as an ACM Fellow, a candidate’s accomplishments are expected to place him or her among the top 1% of ACM members. These individuals have helped to enlighten researchers, developers, practitioners, and end users of information technology throughout the world. The ACM Distinguished Member program, initiated in 2006, recognizes those members with at least 15 years of professional experience who have made significant accomplishments or achieved a significant impact on the computing field. ACM Distinguished Membership recognizes up to 10% of ACM’s top members. The ACM Senior Member program, also initiated in 2006, includes members with at least 10 years of professional experience who have demonstrated performance that sets them apart from their peers through technical leadership, technical contributions, and professional contributions. ACM Senior Member status recognizes the top 25% of ACM Professional Members. The new ACM Fellows, Distinguished Members, and Senior Members join a list of eminent colleagues to whom ACM and its members look for guidance and leadership in computing and information technology.

About ACM 
ACM, the Association for Computing Machinery, is the world’s largest educational and scientific computing society, uniting computing educators, researchers, and professionals to inspire dialogue, share resources and address the field’s challenges. ACM strengthens the computing profession’s collective voice through strong leadership, promotion of the highest standards, and recognition of technical excellence. ACM supports the professional growth of its members by providing opportunities for life-long learning, career development, and professional networking.

View original content to download multimedia: https://www.prnewswire.com/news-releases/excellence-and-impact-recognized-by-worlds-preeminent-computing-society-302667068.html

SOURCE Association For Computing Machinery

