Daily Cyber News – October 13th, 2025

This is today’s cyber news for October 13th, 2025. You can also subscribe to the newsletter and view the archive of previous headlines at daily cyber news dot com.

What happened: Attackers are exploiting a zero-day chain against Oracle E-Business Suite, targeting internet-exposed instances to copy or alter enterprise data; vendor fixes and compensating-control guidance are available.

What this means: This isn’t just a web bug; it’s an enterprise-resource-planning problem that can disrupt payroll, vendors, and reporting if data is copied or altered. Organizations with legacy E B S, heavy customizations, or weak network segmentation are most exposed. For leaders: expect legal, contractual, and audit impacts if vendor or employee data left your control. For defenders: prioritize external-facing E B S, application-tier nodes, and database links; assume credential theft and schedule credential rotations. Signals to watch include unusual concurrent logins to E B S responsibilities in E B S audit logs, and atypical outbound volumes from app tiers to unfamiliar I P addresses in firewall or NetFlow.

Recommendation: Apply vendor fixes or compensating controls now; if patching lags, remove internet exposure, enforce W A F rules for E B S paths, rotate integrated credentials, and verify no staged data jobs within forty-eight hours.
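
For teams that want a concrete starting point on the exposure check, here is a minimal sketch, assuming a combined-format web or W A F access log and that the E B S web tier serves under common default paths such as /OA_HTML/; the paths and internal ranges are assumptions to adjust for your environment.

```python
# Minimal sketch: flag requests to Oracle E-Business Suite web paths that arrive
# from outside internal address space. Assumes a combined-format access log and
# that the EBS web tier serves under /OA_HTML/ (a common default); adjust the
# path prefixes and internal ranges to match your deployment.
import ipaddress
import re

INTERNAL_NETS = [ipaddress.ip_network(n) for n in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]
EBS_PATHS = ("/OA_HTML/", "/OA_CGI/", "/forms/")  # assumed EBS-related prefixes
LOG_LINE = re.compile(r'^(\S+) \S+ \S+ \[[^\]]+\] "(?:GET|POST) (\S+)')

def is_internal(ip: str) -> bool:
    try:
        addr = ipaddress.ip_address(ip)
    except ValueError:
        return False
    return any(addr in net for net in INTERNAL_NETS)

with open("access.log", encoding="utf-8", errors="replace") as fh:
    for line in fh:
        m = LOG_LINE.match(line)
        if not m:
            continue
        src, path = m.groups()
        if any(path.startswith(p) for p in EBS_PATHS) and not is_internal(src):
            print(f"external hit on EBS path: {src} -> {path}")
```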

What happened: SonicWall reported that threat actors used valid credentials to access some customer environments and view backup configuration metadata for firewalls and management appliances. The incident coincides with broader exploitation against S S L V P Ns across vendors. Exposed details can include device names, policies, and stored job settings, which help attackers plan lateral movement or recovery sabotage. SonicWall invalidated tokens, notified customers, and hardened its cloud services while investigations continue.

What this means: Backup jobs quietly map your network and reveal what matters most. Managed service providers and multi-tenant firewall admins are most exposed because of scale and shared workflows. For leaders: treat this like a potential blueprint leak and require a restoration drill to prove resilience. For defenders: rotate all stored credentials, review single-sign-on integrations, and check for anomalous logins from new autonomous systems. Signals to watch include creation of new A P I tokens or admin accounts in management audit logs, and failed backup jobs or altered schedules in backup server logs.

Recommendation: Reset secrets tied to SonicWall services, enforce M F A everywhere, restrict management access by source I P, and verify clean, offline recovery points by performing a test restore.

What happened: A critical path traversal leading to local file inclusion—practical remote code execution—is being exploited against Gladinet CentreStack and Triofox servers. Attackers can read sensitive files, steal secrets, and run code within the application context. Both cloud and on-prem deployments are impacted, with reports of exploitation preceding public disclosure. The vendor published guidance, but many instances remain exposed due to internet-facing file-sharing workflows and tight patch windows.

What this means: File-sharing gateways aggregate documents and tokens; compromise can hand over mapped network paths and authentication material. Mid-market firms with small I T teams and M S P-hosted instances are most exposed. For leaders: anticipate potential data-handling notifications to customers and partners if shares include regulated data. For defenders: isolate affected hosts, revoke tokens, and review service accounts tied to storage backends. Signals to watch include suspicious downloads of configuration files in web server logs, and new executable drops inside application directories flagged by endpoint detection.

Recommendation: Patch or apply vendor mitigations immediately; if delayed, block external access to the portal, rotate secrets, and verify there are no new admin users, scheduled tasks, or web shells within twenty-four to forty-eight hours.
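
To support the web-shell and new-file check inside that window, here is a minimal sketch, assuming a Windows host and a hypothetical install directory; it lists recently modified files with executable or web-script extensions under the application tree.

```python
# Minimal sketch: list files under the application directory that changed in the
# last 48 hours and carry executable or web-script extensions. The directory and
# extension list are assumptions -- point them at your CentreStack/Triofox install.
import os
import time

APP_DIR = r"C:\Program Files (x86)\Gladinet Cloud Enterprise"  # hypothetical install path
SUSPECT_EXT = {".aspx", ".ashx", ".asmx", ".php", ".jsp", ".exe", ".dll", ".ps1"}
cutoff = time.time() - 48 * 3600

for root, _dirs, files in os.walk(APP_DIR):
    for name in files:
        full = os.path.join(root, name)
        ext = os.path.splitext(name)[1].lower()
        try:
            mtime = os.path.getmtime(full)
        except OSError:
            continue
        if ext in SUSPECT_EXT and mtime >= cutoff:
            print(time.strftime("%Y-%m-%d %H:%M", time.localtime(mtime)), full)
```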

What happened: A botnet dubbed “Aisuru” launched distributed denial-of-service attacks peaking near thirty terabits per second, surpassing prior records. Operators combined high-bandwidth nodes and misused infrastructure inside several U S ISPs to amplify traffic. Targets included gaming, finance, and hosting providers. The mix of volumetric floods and application-layer bursts stressed auto-scaling and upstream scrubbing centers.

What this means: Record-scale floods can briefly overwhelm even well-protected edges, causing latency and outages that customers notice. Telecoms, gaming platforms, and any service with global user bases are most exposed. For leaders: confirm your D D o S contract covers multi-terabit, multi-vector events and rapid upstream engagement. For defenders: pre-stage B G P diversion, tune rate-limits, and cache critical pages; coordinate now with transit providers. Signals to watch include sudden surges in packets per second from unusual internet exchange points in network telemetry, and H T T P five-oh-three spikes paired with short CPU saturation in app and load balancer metrics.

What happened: Apple increased its maximum Security Bounty payout to two million dollars for zero-click remote code execution vulnerabilities that bypass platform defenses. The program clarifies triage timelines and broadens eligible categories, aiming to attract elite researchers who might otherwise sell to brokers. The move comes amid continued spyware targeting of i O S and mac O S users, including journalists, executives, and officials.

What this means: Bigger payouts can shift exploit economics and draw high-end findings into coordinated disclosure. Mobile-first organizations, high-risk executives, and regulated industries are most exposed to zero-click threats across messages, media, and wireless stacks. For leaders: budget for faster mobile O S adoption and user-experience tradeoffs when rapid mitigations ship. For defenders: raise priority on mobile telemetry, mobile-device-management enforcement, and lock-down modes for at-risk roles. Signals to watch include sudden iMessage-related crash logs in device diagnostics, and M D M alerts for new rapid security responses outside normal cadence.

What happened: A financially motivated group that Microsoft tracks as Storm twenty-six fifty-seven is breaking into human-resources software accounts and changing direct-deposit details so paychecks route to mule bank accounts. Access usually comes from password reuse, phished sessions, or malicious OAuth—open authorization—apps with payroll scopes. Once inside, the actors add new payees, swap routing numbers, and create forwarding rules to hide confirmations. Several campaigns hit North America during recent payroll cycles, taking advantage of weak multi-factor settings and overly broad admin rights.

Recommendation: Require step-up verification and dual approval for any bank-detail change; if that isn’t possible today, disable self-service edits temporarily and verify every change with an out-of-band employee callback.

What happened: Researchers uncovered a cluster of one hundred seventy-five npm packages seeded over weeks to smuggle phishing components and steal credentials from developers and continuous-integration environments. The packages used typosquats of popular libraries, post-install scripts to exfiltrate tokens, and rotating throwaway domains. Some artifacts targeted environment variables used by cloud and Git hosting, raising the chance of supply-chain impact beyond developer workstations.

What this means: Package registries remain a prime delivery channel into software pipelines. Teams with permissive npm install policies, broad personal-access-token usage, and unpinned dependencies are most exposed. For leaders: demand an inventory of third-party packages and a gate on who can add dependencies. For defenders: enable lockfiles, block post-install scripts by default, and scan artifacts before publish and before deploy. Signals to watch include new packages with few downloads but many versions in hours from registry monitoring, and builds invoking curl or wget from npm lifecycle hooks in CI logs.

Recommendation: Route external packages through an allowlist and private mirror; if immediate blocking breaks builds, freeze lockfiles and verify no npm lifecycle scripts execute during install.
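
As a starting point for the lifecycle-script check, here is a minimal sketch that walks node_modules and reports packages declaring install-time hooks; run it from the project root after an install.

```python
# Minimal sketch: report installed packages that declare install-time lifecycle
# scripts (preinstall/install/postinstall/prepare), which is where malicious npm
# packages typically hide token exfiltration. Run from the project root.
import json
import os

LIFECYCLE = ("preinstall", "install", "postinstall", "prepare")

for root, _dirs, files in os.walk("node_modules"):
    if "package.json" in files:
        pkg_file = os.path.join(root, "package.json")
        try:
            with open(pkg_file, encoding="utf-8") as fh:
                pkg = json.load(fh)
        except (OSError, json.JSONDecodeError):
            continue
        scripts = pkg.get("scripts") or {}
        hooks = {k: v for k, v in scripts.items() if k in LIFECYCLE}
        if hooks:
            print(pkg.get("name", root), hooks)
```

Setting ignore-scripts=true in the project's .npmrc prevents lifecycle hooks from running at all, at the cost of breaking packages that genuinely need a build step.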

What happened: Fortra published an exploitation timeline for a GoAnywhere managed-file-transfer flaw—C V E twenty twenty-five dash one zero zero three five—detailing discovery, patch release, and in-the-wild use by data-theft actors. The issue enables unauthenticated access to administrative functions in certain configurations, allowing file exfiltration and the creation of persistent user accounts. The chronology helps customers align incident windows, validate log retention, and see when mass scanning started.

What this means: M F T platforms concentrate sensitive data flows and partner exchanges; abuse affects customers and vendors. Enterprises with third-party integrations and automated trading partners are most exposed. For leaders: require formal notification to key customers and suppliers if transfers could have been accessed. For defenders: correlate upgrade dates to anomalous admin actions, rotate credentials, and reissue partner A P I keys. Signals to watch include creation of new administrative users outside change windows in GoAnywhere audit trails, and unusual outbound transfers to non-partner addresses in firewall and M F T logs.

Recommendation: Patch to the fixed release, rotate all credentials and tokens, and verify partner allowlists; if downtime blocks patching, isolate the M F T behind V P N-only access and review admin audits daily.


What happened: Ransomware crews have been observed deploying Velociraptor—the open-source digital forensics and incident response agent—to persist, collect host data, and execute tasks while masquerading as legitimate forensics. Adversaries sideload signed binaries, create stealthy collections, and use the client-server model to run commands at scale, blending with blue-team activity. Normal artifacts can mask malicious use, complicating investigations.

What this means: Trust in defensive tooling can be exploited to bypass detection and policy reviews. Enterprises with broad endpoint-security exemptions for DFIR utilities and loose signing policies are most exposed. For leaders: enforce an allowlist for sanctioned forensic tools with explicit ownership and logging. For defenders: baseline Velociraptor server certificates and storage paths, and alert on new deployments outside incident-response tickets. Signals to watch include fresh clients registering from non-IR subnets in Velociraptor logs, and collections invoking native archivers and credential dumpers in endpoint telemetry.

Recommendation: Restrict DFIR tool execution to controlled IR enclaves; if that’s not feasible now, monitor for new Velociraptor configurations and verify all server fingerprints against an approved inventory.
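
To help verify server fingerprints against an approved inventory, here is a minimal sketch, assuming Python 3.10 or later; the candidate ports, host, and fingerprint list are placeholders to populate from your sanctioned deployments.

```python
# Minimal sketch: fetch the TLS certificate presented on a suspected Velociraptor
# frontend and compare its SHA-256 fingerprint to an approved inventory. Ports,
# hosts, and the fingerprint list are assumptions -- populate them from the
# servers your IR team actually operates.
import hashlib
import socket
import ssl

APPROVED_FINGERPRINTS = {
    # "aabbcc...": "ir-velociraptor-01",   # hypothetical entries
}
CANDIDATE_PORTS = (8000, 8889)  # commonly used frontend/GUI ports; adjust as needed

def cert_fingerprint(host: str, port: int) -> str | None:
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE  # self-signed certificates are normal here
    try:
        with socket.create_connection((host, port), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                der = tls.getpeercert(binary_form=True)
                return hashlib.sha256(der).hexdigest()
    except OSError:
        return None

for host in ["198.51.100.20"]:  # hypothetical host seen in telemetry
    for port in CANDIDATE_PORTS:
        fp = cert_fingerprint(host, port)
        if fp and fp not in APPROVED_FINGERPRINTS:
            print(f"unapproved Velociraptor-style server: {host}:{port} sha256={fp}")
```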

What this means: It’s a consumer-to-enterprise bleed-over risk: stolen personal data fuels account takeover at work. State and local agencies, financial services, and employers with broad bring-your-own-device adoption are most exposed. For leaders: coordinate with H R to warn staff and provide a safe reporting path for suspicious tax messages. For defenders: enable SMS-phishing reporting in mobile device management and block newly registered domains at the gateway. Signals to watch include spikes in DNS queries for tax-themed look-alikes in DNS logs, and sudden mobile enrollments or profile removals in M D M logs.

Recommendation: Push an all-hands advisory with screenshots today; if controls are limited, block newly registered domains for seven to fourteen days and verify no direct-deposit changes occurred without callback verification.

What happened: The Clop ransomware group listed Harvard as a victim on its leak site, claiming access to sensitive data tied to university operations. As with most name-and-shame posts, they shared samples and a countdown to pressure negotiations. Early claims point to administrative and research files rather than classroom content. Universities run diverse systems and third-party platforms, giving multiple paths to data. Harvard said it’s investigating with partners while keeping details limited.

What this means: Higher education has sprawling networks, legacy systems, and tight budgets—conditions that favor data theft and extortion. Institutions with many vendors, labs, and independently managed servers are most exposed. For leaders: prepare for notifications and inquiries from donors, students, and research partners if sensitive records were accessed. For defenders: review identity federation, file-share exposure, and service accounts used by research tools. Signals to watch include spikes in archive creation from research file servers in endpoint logs, and outbound transfers to unfamiliar autonomous systems in NetFlow or firewall telemetry.

Recommendation: Execute your ransomware-extortion playbook now; if scope is unclear, isolate high-value file shares, rotate service credentials, and verify immutable backups for critical academic and research data.

What happened: The U.S. Cybersecurity and Infrastructure Security Agency added Grafana’s path-traversal vulnerability—C V E twenty twenty-one dash four three seven nine eight—to the Known Exploited Vulnerabilities catalog, signaling observed exploitation. The flaw allows unauthenticated file reads on affected versions, which attackers use to pull secrets and pivot. KEV inclusion starts the remediation clock for federal agencies and is a practical signal for enterprises to re-check exposure.
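
For a quick exposure check, here is a minimal sketch that scans a reverse-proxy or Grafana access log for traversal attempts against the /public/plugins/ route associated with this C V E; the log path and format are assumptions.

```python
# Minimal sketch: scan an access log for path-traversal attempts against the
# /public/plugins/ route abused by CVE-2021-43798. The log filename and format
# are assumptions; adjust the regex to your proxy's log layout.
import re

TRAVERSAL = re.compile(
    r'"(?:GET|POST) (/public/plugins/[^" ]*(?:\.\./|%2e%2e%2f|\.\.%2f)[^" ]*)',
    re.IGNORECASE,
)

with open("grafana_access.log", encoding="utf-8", errors="replace") as fh:
    for lineno, line in enumerate(fh, 1):
        m = TRAVERSAL.search(line)
        if m:
            print(f"line {lineno}: possible traversal attempt -> {m.group(1)}")
```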

What happened: Windows 11 version 23 H 2 Home and Pro will hit end of support in about thirty days, ending monthly security updates. Devices that aren’t upgraded by that milestone stop receiving fixes, leaving users exposed to newly disclosed vulnerabilities. Enterprise editions may have different timelines, but unmanaged Home or Pro fleets often linger in small businesses and on contractor devices.

What this means: End of support creates a silent attack surface—everything continues to run, but protections stop. Small and midsize businesses, contractors, and Bring-Your-Own-Device programs are most exposed. For leaders: budget and mandate upgrades, because E O S devices will fail compliance checks and cyber-insurance questionnaires. For defenders: inventory Windows versions, enforce Feature Update rings, and apply upgrade blocks only where documented. Signals to watch include patch-compliance drift on 23 H 2 devices in endpoint management and increases in blocked exploit attempts against unpatched components in endpoint detection.

Recommendation: Schedule in-place upgrades to supported Windows 11 builds this week; if users can’t upgrade, remove privileged access from E O S devices and verify conditional access blocks risky sign-ins.
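
To support the version inventory, here is a minimal sketch that flags 23 H 2 devices, which correspond to build 22631, from a C S V export of your endpoint-management tool; the column names are assumptions to map onto whatever your export actually produces.

```python
# Minimal sketch: flag Windows 11 23H2 (build 22631) Home/Pro devices from a CSV
# inventory exported by your endpoint-management tool. Column names are
# assumptions; map them to the fields in your own export.
import csv

EOL_BUILDS = {"22631"}  # Windows 11 23H2

with open("device_inventory.csv", newline="", encoding="utf-8") as fh:
    for row in csv.DictReader(fh):
        build = (row.get("os_build") or "").split(".")[0]
        edition = row.get("os_edition") or ""
        if build in EOL_BUILDS and ("Home" in edition or "Pro" in edition):
            print(f"{row.get('hostname', 'unknown')}: {edition} build {build} reaches end of support soon")
```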

What happened: A critical authentication-bypass flaw in the Service Finder WordPress plugin—under active exploitation—lets unauthenticated attackers create or escalate accounts and take over sites. Observed attacks create admin users, alter payment settings, and inject malicious content. Many sites rely on the plugin for booking and marketplace features, increasing brand and revenue impact for S M Bs.

What this means: WordPress plugins remain frequent entry points for full site compromise, defacement, and card-skimming. Small and mid-market companies with self-hosted WordPress and auto-updates disabled are most exposed. For leaders: treat this as a customer-facing outage and brand risk; plan incident comms if the site processes payments or P I I. For defenders: disable the plugin until patched, validate admin lists, and scan themes and plugins for unfamiliar files. Signals to watch include sudden creation of new admin users in WordPress logs and unexpected changes to payment or webhook settings in plugin or gateway portals.

Recommendation: Update or disable Service Finder immediately; if no patch is available, block access to affected endpoints, restore from a clean backup, and verify no rogue admins, web shells, or checkout changes.
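
For the rogue-admin check, here is a minimal sketch that lists administrator accounts straight from the WordPress database so they can be compared against a known-good roster; the connection details are placeholders, and the wp_ table prefix is the WordPress default, which may differ on your site.

```python
# Minimal sketch: list WordPress administrator accounts directly from the database
# for comparison against a known-good roster. Connection details are placeholders;
# the "wp_" table prefix is the WordPress default and may differ.
import pymysql  # assumes the PyMySQL package is installed

conn = pymysql.connect(host="localhost", user="wp_readonly", password="...", database="wordpress")
try:
    with conn.cursor() as cur:
        cur.execute(
            """
            SELECT u.ID, u.user_login, u.user_registered
            FROM wp_users u
            JOIN wp_usermeta m ON m.user_id = u.ID
            WHERE m.meta_key = 'wp_capabilities'
              AND m.meta_value LIKE '%administrator%'
            ORDER BY u.user_registered DESC
            """
        )
        for user_id, login, registered in cur.fetchall():
            print(user_id, login, registered)
finally:
    conn.close()
```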

What happened: Telemetry shows a large, coordinated surge in Remote Desktop Protocol scanning and credential-stuffing attempts from more than one hundred thousand I P addresses worldwide. Attackers rotate common username and password pairs, target exposed ports, and try to bypass basic lockout rules. The activity coincides with new lists of internet-facing R D P endpoints circulating among access brokers.

What this means: R D P remains one of the fastest paths to ransomware and hands-on-keyboard intrusions. Any organization with R D P open to the internet or weak multi-factor on remote access is at risk. For leaders: remove internet-exposed R D P from your risk register by mandate, not suggestion. For defenders: require V P N or Zero Trust Network Access, enforce M F A, and watch for password-spray indicators. Signals to watch include authentication attempts from many countries in short windows in domain-controller logs, and spikes in Network Level Authentication failures in Windows Security logs or your S I E M.

Recommendation: Close public R D P now; if you must keep it, changing ports is only a stopgap—enforce M F A via a secure gateway and verify lockouts and alerting for brute-force patterns within twenty-four hours.
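
To back the brute-force alerting check, here is a minimal sketch that looks for spray patterns in an exported C S V of Windows Security event I D forty-six twenty-five failed logons; the column names and thresholds are assumptions.

```python
# Minimal sketch: look for password-spray patterns in an exported CSV of Windows
# Security event ID 4625 (failed logons). Column names and thresholds are
# assumptions -- map them to the fields your SIEM or wevtutil export produces.
import csv
from collections import defaultdict
from datetime import datetime

WINDOW_MINUTES = 10
THRESHOLD_SOURCES = 20  # distinct source addresses per window worth alerting on

buckets = defaultdict(set)
with open("failed_logons_4625.csv", newline="", encoding="utf-8") as fh:
    for row in csv.DictReader(fh):
        ts = datetime.fromisoformat(row["timestamp"])
        bucket = ts.replace(minute=(ts.minute // WINDOW_MINUTES) * WINDOW_MINUTES,
                            second=0, microsecond=0)
        buckets[bucket].add(row["source_ip"])

for bucket, sources in sorted(buckets.items()):
    if len(sources) >= THRESHOLD_SOURCES:
        print(f"{bucket:%Y-%m-%d %H:%M}: {len(sources)} distinct sources attempting logons")
```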

What happened: Analysts report new Stealit stealer campaigns packing payloads with Node dot J S “Single Executable Applications,” delivered through malvertising and trojanized installers for popular tools. The bundle can evade simple static checks, unpack in memory, and raid browser cookies, password vaults, and crypto wallets before exfiltration to rotating command-and-control. Operators target gamers, V P N seekers, and small-business users who sideload from sponsored search or S E O-poisoned sites.

What this means: Attackers are abusing convenience features in modern runtimes to hide commodity theft at scale. S M Bs with unmanaged endpoints and any org that allows self-install software are most exposed. For leaders: assume personal-use PCs and lightly managed contractor laptops can become pivots into corporate apps. For defenders: block ad-based downloads, enforce application control, and add Y A R A or sandbox coverage for Node dot J S S E A artifacts. Signals to watch include parentless processes spawning Node binaries from temp paths in endpoint telemetry, and bursts of POST requests after browser credential enumeration in proxy logs.

Recommendation: Restrict installs to an allowlisted catalog; if that’s not feasible today, block ad-driven downloads at D N S and verify your E D R detects Node dot J S S E A packers and browser-data harvesting behaviors.
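
For the Node dot J S detection check, here is a minimal sketch using the psutil package that flags running Node binaries launched from temp or user-writable paths; the path heuristics are assumptions and complement, rather than replace, E D R coverage.

```python
# Minimal sketch: flag running processes whose executable looks like a Node.js
# binary launched from a temp or user-writable path, one of the behaviors
# described for SEA-packed stealers. Requires psutil; the path heuristics are
# assumptions and will miss renamed SEA binaries.
import psutil

SUSPECT_DIRS = ("\\temp\\", "\\tmp\\", "\\appdata\\local\\temp\\", "\\downloads\\")

for proc in psutil.process_iter(["pid", "ppid", "name", "exe"]):
    name = (proc.info["name"] or "").lower()
    exe = (proc.info["exe"] or "").lower()
    if "node" in name and any(d in exe for d in SUSPECT_DIRS):
        print(f"pid {proc.info['pid']} parent {proc.info['ppid']}: {exe}")
```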

What happened: Spanish authorities dismantled “G X C Team,” a phishing-as-a-service group that sold turnkey kits, hosting, and mule services. Police seized infrastructure, payment rails, and thousands of cloned login pages used to harvest banking and email credentials. The operation followed months of victim reporting and cross-border coordination, disrupting active campaigns against E U consumers and businesses.

What this means: Demand-side fraud can reconstitute, but losing operators, servers, and templates creates a temporary drop in kit quality and availability. Financial institutions, telcos, and logistics brands spoofed by these kits are most exposed to rebound waves using fresh domains. For leaders: expect a brief lull followed by copycats; keep customer-notification playbooks warm. For defenders: tighten takedown S L As and pre-block look-alike domains via registries and secure email gateways. Signals to watch include spikes in newly registered domains containing brand strings in threat-intel feeds, and HTML forms posting directly to I P addresses in mail and proxy logs.

Recommendation: Coordinate with your brand-protection vendor on preemptive blocks; if you lack coverage, subscribe to a look-alike domain feed and verify abuse-report pathways remove phishing within twenty-four hours.
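
If brand-protection coverage is thin, here is a minimal sketch that filters a newly-registered-domain feed for protected brand strings as candidates for preemptive blocking; the feed file and brand list are hypothetical.

```python
# Minimal sketch: filter a newly-registered-domain feed (one domain per line) for
# entries containing protected brand strings, as a starting point for preemptive
# blocks and takedown requests. The feed file and brand list are placeholders.
BRANDS = ("examplebank", "examplepay", "exampleshipping")  # hypothetical brand strings

with open("newly_registered_domains.txt", encoding="utf-8") as fh:
    for domain in (line.strip().lower() for line in fh):
        if domain and any(brand in domain for brand in BRANDS):
            print(f"candidate look-alike for review/blocking: {domain}")
```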

What happened: VirusTotal announced changes to its access tiers and contributor program, clarifying who can upload, query, and retrieve advanced telemetry. The update aims to curb abuse and reward meaningful sharing from security teams and vendors. A P I limits, data-use policies, and contributor verification are tightening, with emphasis on privacy and responsible use.

What this means: Many blue teams rely on VirusTotal for triage and pivots; policy shifts can hit workflows, automation, and budgets. Smaller teams on legacy plans and heavy pipeline users are most exposed to rate or access changes. For leaders: plan for tier adjustments and ensure legal reviews of sharing align with policy. For defenders: audit dependencies in S O A R or S I E M that assume specific V T quotas, and stage fallbacks to alternative reputation sources. Signals to watch include automation job failures on V T A P I calls in S O A R logs, and analyst time-to-triage creeping up after tier changes in ticket metrics.

Recommendation: Inventory where V T is embedded in tooling, confirm new quotas and terms, and verify pipelines degrade gracefully with cached verdicts and secondary feeds.
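
For the graceful-degradation check, here is a minimal sketch that wraps a VirusTotal v3 file-report lookup and falls back to a cached verdict when quota or access errors occur; the JSON cache file is a stand-in for whatever store your S O A R already uses, and the requests package is assumed to be installed.

```python
# Minimal sketch: wrap a VirusTotal v3 file-report lookup so pipelines degrade to
# a cached verdict when quota or access changes bite (HTTP 401/403/429). The JSON
# cache file is a stand-in for whatever store your automation already uses.
import json
import os
import requests

VT_URL = "https://www.virustotal.com/api/v3/files/{}"
API_KEY = os.environ.get("VT_API_KEY", "")
CACHE_FILE = "vt_cache.json"

def load_cache() -> dict:
    try:
        with open(CACHE_FILE, encoding="utf-8") as fh:
            return json.load(fh)
    except (OSError, json.JSONDecodeError):
        return {}

def lookup(sha256: str) -> dict | None:
    cache = load_cache()
    resp = requests.get(VT_URL.format(sha256), headers={"x-apikey": API_KEY}, timeout=15)
    if resp.status_code == 200:
        stats = resp.json()["data"]["attributes"]["last_analysis_stats"]
        cache[sha256] = stats
        with open(CACHE_FILE, "w", encoding="utf-8") as fh:
            json.dump(cache, fh)
        return stats
    if resp.status_code in (401, 403, 429):  # quota or tier problem: fall back to cache
        return cache.get(sha256)
    resp.raise_for_status()
    return None
```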

What happened: Separate from the zero-day chain earlier, Oracle E-Business Suite faces C V E twenty twenty-five dash six one eight eight four, enabling unauthenticated access to sensitive endpoints in some configurations. Successful exploitation can expose finance, H R, and supply-chain data or facilitate privilege escalation across modules. Risk is highest for internet-exposed instances and environments with complex customizations that slow patching.

What this means: This broadens the E B S attack surface beyond a single path and raises the odds that partial mitigations miss real exposure. Enterprises with legacy E B S, S S O bridges, and back-end database integrations are most exposed. For leaders: treat E B S as a regulated system of record and demand proof of compensating controls until patched. For defenders: restrict E B S to internal networks, enforce W A F rules, and audit responsibilities for over-privilege. Signals to watch include access to responsibility menus by anonymous users in E B S audit logs, and sharp increases in SELECTs from H R or G L schemas in database audit.

Recommendation: Apply the specific fix as soon as available and harden exposure; if delayed, remove public access, enable strict W A F signatures for E B S paths, and verify no anonymous role expansion within forty-eight hours.

What happened: Ukraine’s cyber authorities report a surge in operations where Russia-linked actors use generative and assistive A I to craft lures, automate recon, and tune malware configs. Campaigns pair polished spear-phishing with faster infrastructure rotation and payload notes tailored to victim sectors, targeting government, media, and critical infrastructure.

What this means: A I shortens iteration cycles for capable adversaries, raising the baseline quality of phishing and operational security. Public sector bodies, media outlets, and critical-infrastructure operators with high social-engineering exposure are most at risk. For leaders: increase the cadence of user training and require out-of-band verification for sensitive requests. For defenders: add content-agnostic controls—deterministic link isolation and attachment detonation—and tighten domain age and reputation thresholds. Signals to watch include surges in highly personalized emails from newly registered domains in secure email gateways, and S P F or D K I M passes from unfamiliar sending infrastructure in D M A R C aggregate reports paired with unusual reply-to routes in mail gateway logs.

Recommendation: Ramp up phishing simulations and harden mail gateways now; if controls lag, enforce link isolation for external mail and verify finance and identity workflows require secondary, out-of-band approvals.

That’s the BareMetalCyber Daily Brief for October 13th, 2025. For more, visit BareMetalCyber dot com. You can also subscribe to the newsletter and view the archive of previous headlines at daily cyber news dot com. We’re back tomorrow.
