Make AI Safe, Before You Make It Smart

The public availability of GenAI in 2022 fundamentally changed the way individuals and businesses go about what was, until then, their pre-GenAI way of life. Setting aside, for the moment, opinions about hype versus immediate value, the disruptive force was unparalleled in history. The unprecedented rate of adoption forced businesses to assess, in haste, whether their investments and strategy needed to evolve.

In 2025, we have seen not only massive diversity in the application of AI capabilities across all industries, but also rapid evolution of the technology provided by vendors, continuing at a pace the world can barely keep up with. At a time when research indicates that 95% of AI projects fail, how can security, compliance and risk teams shepherd their organisations’ adoption of AI while minimising inadvertent risk exposures that could be exploited by criminals or trigger regulatory scrutiny and penalties?

Each new development in AI challenges cybersecurity readiness. However, the hype should not distract cybersecurity professionals from the bottom line: if AI isn’t properly governed and built with security from the ground up, negative outcomes will far outpace the ability of security teams to keep up.

The importance of ‘secure-by-design’ in AI for cybersecurity

‘Secure-by-design’ is more than a marketing slogan. It is the principle that security must be baked in from the ground up, rather than bolted on as a patch or an afterthought. In practice, this means technology must be designed to reasonably protect against malicious actors, safeguard sensitive data and defend the connected infrastructure that organisations rely on.

AI raises the stakes. Because AI systems make decisions at scale and access massive datasets, any flaw or misconfiguration in the code can have far-reaching consequences. Organisations now face nearly 2,000 attacks per week, with the average breach costing $4.88 million. As AI becomes increasingly central to operations, a single vulnerability could cause significant business disruption.

Too often, companies fail to apply rigorous oversight to how AI systems are built, trained and deployed. AI also doesn’t operate in a vacuum. Most organisations rely on third-party vendors and external services for their AI solutions, which means secure-by-design needs to extend across an organisation’s entire supply chain. Every tool or platform introduced without strong safeguards widens the organisation’s attack surface: as digital ecosystems expand, so do the opportunities for threat actors to exploit new vulnerabilities.

Organisations that make AI secure from the very beginning do more than protect critical processes and information; they create systems they can trust to innovate safely. Regulators and industry standards are also starting to demand this approach, making secure-by-design AI both a strategic and an operational imperative.

Overall, AI promises smarter defences, but if it’s not made secure by design, it risks becoming a bigger liability than the problems it’s supposed to solve.  

Using AI to strengthen cyber defences without compromising data privacy 

Traditional defences often miss what looks like “noise.” AI-powered systems built on secure-by-design principles can turn that noise into insight. Deep learning and Natural Language Processing (NLP) can correlate seemingly unrelated events, such as unusual login attempts and abnormal network traffic, to identify complex attack patterns.  
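To make the idea of correlating seemingly unrelated events concrete, here is a minimal sketch in Python. It joins a stream of failed-login events with a stream of unusual-traffic events by source IP within a short time window; the field names, sample values and 10-minute window are illustrative assumptions, not any vendor’s detection logic.

```python
# Minimal sketch: correlate two "noisy" event streams by entity and time window.
# Field names (src_ip, timestamp) and the 10-minute window are illustrative assumptions.
import pandas as pd

failed_logins = pd.DataFrame({
    "src_ip": ["10.0.0.5", "10.0.0.5", "10.0.0.9"],
    "timestamp": pd.to_datetime(["2025-01-01 02:00", "2025-01-01 02:03", "2025-01-01 09:00"]),
})
unusual_traffic = pd.DataFrame({
    "src_ip": ["10.0.0.5", "10.0.0.7"],
    "timestamp": pd.to_datetime(["2025-01-01 02:06", "2025-01-01 14:00"]),
    "bytes_out": [250_000_000, 80_000_000],
})

# Join the streams on the shared entity, then keep pairs that occur close together in time.
joined = failed_logins.merge(unusual_traffic, on="src_ip", suffixes=("_login", "_traffic"))
window = pd.Timedelta(minutes=10)
correlated = joined[(joined["timestamp_traffic"] - joined["timestamp_login"]).abs() <= window]

# Each surviving row is a candidate "failed logins followed by large outbound transfer" pattern.
print(correlated[["src_ip", "timestamp_login", "timestamp_traffic", "bytes_out"]])
```

In isolation, each stream looks like noise; it is the join across entities and time that surfaces the attack pattern.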

One of the big misconceptions, however, is that using AI for cybersecurity requires sharing large amounts of sensitive regulatory and compliance data. This shouldn’t be the case. Modern AI-powered Security Information and Event Management (SIEM) systems are designed to keep that data secure while analysing enormous data volumes in real time, using machine learning algorithms that establish baselines of “normal” behaviour and flag anomalies with exceptional precision.
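As a rough illustration of the baseline-and-flag approach, the sketch below trains an anomaly detector on summaries of normal per-host activity and scores new activity against that baseline. The features (events per hour, distinct destinations, gigabytes out) and the contamination rate are assumptions for illustration, not a SIEM vendor’s actual model.

```python
# Minimal sketch of baseline-and-flag anomaly detection, as a SIEM might apply it.
# The features and thresholds below are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Train on "normal" per-host activity summaries to establish the baseline:
# [events per hour, distinct destinations, GB out per day]
baseline = rng.normal(loc=[120, 15, 2.0], scale=[20, 4, 0.5], size=(1000, 3))
model = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

# Score today's activity; -1 means the host deviates from the learned baseline.
today = np.array([
    [118, 14, 2.1],   # looks like normal traffic
    [400, 90, 25.0],  # burst of events, many destinations, large outbound volume
])
print(model.predict(today))   # e.g. [ 1 -1]
```

Note that only aggregated behavioural features are modelled here; no record content or customer data needs to leave the environment to produce the alert.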

Extended Detection and Response (XDR) platforms further illustrate this shift toward AI-driven cybersecurity that is secure by design. By aggregating data from networks, cloud environments, endpoints and identity systems into a unified view, these platforms enable advanced behavioural analytics that continuously monitor user and entity activity. This modelling helps define normal behaviour across the digital ecosystem, allowing security teams to detect anomalies early, without compromising data privacy.   
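The behavioural-analytics idea can be shown with an even simpler per-user baseline: learn each user’s normal activity from history, then flag large deviations. The metric (daily megabytes downloaded) and the 3-sigma threshold are illustrative assumptions; real XDR platforms model far richer behaviour across endpoints, networks and identities.

```python
# Minimal sketch of user and entity behaviour analytics (UEBA): per-user baselines
# built from historical activity, with a z-score to flag deviations.
# The metric and the 3-sigma threshold are illustrative assumptions.
from statistics import mean, stdev

history = {  # per-user daily download volumes (MB) over the baseline period
    "alice": [120, 130, 110, 125, 118, 122, 128],
    "bob":   [40, 45, 38, 50, 42, 44, 41],
}
today = {"alice": 131, "bob": 900}

for user, observed in today.items():
    mu, sigma = mean(history[user]), stdev(history[user])
    z = (observed - mu) / sigma if sigma else 0.0
    if abs(z) > 3:
        print(f"ALERT: {user} deviates from baseline (z={z:.1f})")
    else:
        print(f"{user}: within baseline (z={z:.1f})")
```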

To ensure AI is both effective and compliant, organisations should also apply a few practical deployment principles. Prioritise building tools that automate internal processes rather than directly analysing customer data. Wherever possible, process data locally rather than sending it to external cloud services for analysis, to reduce exposure risks.

By embedding responsible AI practices and aligning them with GDPR requirements such as data minimisation, purpose limitation and accountability, these platforms can operate in a compliant and ethical manner. They should also take emerging regulation such as the EU AI Act into account. Together, these measures enable real-time threat response without compromising user trust or data integrity.
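As one way of illustrating data minimisation in practice, the sketch below keeps only the fields the detection logic needs and pseudonymises direct identifiers with a salted hash before any record is analysed. The field names and salt handling are assumptions for illustration, not legal or compliance advice.

```python
# Minimal sketch of data minimisation before analysis: keep only the fields the
# detection logic needs and pseudonymise direct identifiers with a salted hash.
# Field names and salt handling are illustrative assumptions.
import hashlib

SALT = b"rotate-me-and-store-me-separately"  # in practice, kept outside the dataset

def pseudonymise(value: str) -> str:
    return hashlib.sha256(SALT + value.encode()).hexdigest()[:16]

def minimise(event: dict) -> dict:
    # Purpose limitation: only the fields needed for anomaly detection survive.
    return {
        "user": pseudonymise(event["email"]),
        "action": event["action"],
        "bytes_out": event["bytes_out"],
        "timestamp": event["timestamp"],
    }

raw_event = {
    "email": "j.doe@example.com",
    "full_name": "Jane Doe",          # dropped: not needed for detection
    "action": "file_download",
    "bytes_out": 52_428_800,
    "timestamp": "2025-01-01T02:06:00Z",
}
print(minimise(raw_event))
```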

Finally, AI deployment should be underpinned by clear contractual safeguards. That means data processing agreements that define how information is handled and retained, vendor warranties that guarantee customer data won’t be repurposed for training, and well-defined breach notification terms. Without these protections, even the most sophisticated AI risks becoming a compliance headache.  

When implemented responsibly, XDR can support GDPR compliance and reinforce trust in AI-powered defences. 

The bottom line 

AI in cybersecurity is no longer optional; threat actors have already embraced AI. They’re running automated phishing campaigns, developing adaptive malware designed to outsmart traditional defences, and deploying real-time evasion techniques.  

Defenders need to catch up while also ensuring compliance and maintaining digital trust. Traditional signature-based detection misses advanced threats that behavioural AI catches with 98% accuracy. The question isn’t whether cybersecurity teams should adopt AI; it’s how AI can be adopted effectively and securely before attackers gain a permanent advantage. The answer lies in deploying AI in ways that strengthen defences without introducing new risks.

The winning approach is straightforward: build secure-by-design tools, not data pipelines. Use AI to generate scripts, create dashboards and automate configurations while keeping sensitive data local rather than processing it elsewhere. Organisations that master this tool-building approach will gain AI’s defensive advantages without the compliance headaches, regulatory penalties or customer trust issues that come with external data sharing or exposed attack surfaces. A sketch of what this separation can look like follows below.
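One way to picture “tools, not data pipelines”: only non-sensitive metadata (a table schema and a plain-language request) is handed to the model, and the generated query then runs against data that never leaves the local environment. In this sketch, `generate_sql` is a hypothetical placeholder for whatever LLM client an organisation actually uses; the schema, table and query are illustrative assumptions.

```python
# Minimal sketch of "build tools, not data pipelines": the model only ever sees
# metadata (schema + request); the generated query runs against local data.
# `generate_sql` is a hypothetical placeholder for a real LLM client call.
import sqlite3

SCHEMA = "logins(user TEXT, src_ip TEXT, success INTEGER, ts TEXT)"
REQUEST = "Count failed logins per user in the last 24 hours."

def generate_sql(schema: str, request: str) -> str:
    # Placeholder: a real implementation would call an LLM with schema + request only.
    return ("SELECT user, COUNT(*) AS failed FROM logins "
            "WHERE success = 0 AND ts >= datetime('now', '-1 day') "
            "GROUP BY user ORDER BY failed DESC")

conn = sqlite3.connect(":memory:")          # stands in for the local, on-prem log store
conn.execute("CREATE TABLE logins(user TEXT, src_ip TEXT, success INTEGER, ts TEXT)")
conn.execute("INSERT INTO logins VALUES ('alice', '10.0.0.5', 0, datetime('now'))")

query = generate_sql(SCHEMA, REQUEST)       # model sees metadata, never the log rows
print(conn.execute(query).fetchall())
```

The design choice is the point: the AI accelerates the security team’s tooling, while the sensitive logs stay inside the boundary the organisation already controls.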
