Blog

  • EP: 2 Karmine Kompass, AI & The Future of Jobs | Usha Rangaraju

    Welcome to a pivotal conversation on the Karmine Kompass podcast, hosted by Santosh Sirur.

    In this episode, we sit down with AI expert, Usha Rangaraju, an “accidental AI engineer” who transitioned from investment banking to a successful consulting career in AI. Usha shares her insights on the rapid evolution of Artificial Intelligence, its impact on the job market, and how India is uniquely embracing this technological wave.

    Usha discusses the foundational skills required for AI, the shift from coding to soft skills like resilience and critical thinking, and provides predictions for the future of work by 2030.

    Key Discussion Points & Timestamps:

    02:32 | Becoming an Accidental AI Engineer: Usha Rangaraju’s journey into AI from investment banking.

    04:58 | Landing an AI Gig: The crucial role of networking and community in securing consulting roles.

    06:24 | Consulting vs. Full-Time Roles: The difference in required skill sets for end-to-end consulting versus specialized full-time AI roles.

    08:21 | The Foundation of AI: Why mathematics (probability, statistics, linear algebra) is non-negotiable.

    09:22 | The Role of Coding: Why coding is still important for debugging, even with the rise of “vibe coding” tools and AI agents.

    11:09 | AI Revolutionizing Business: Examples of AI transforming data visualization (e.g., using Gemini or ChatGPT) and content creation, leading to a massive reduction in hired content creators.

    16:25 | AI in India’s Public Sector: Implementation in policing (helmet/license plate detection) and the fully AI-powered DigiYatra application at airports.

    19:27 | Future AI Focus in India: High potential in semiconductor manufacturing for early defect identification and massive cost reduction in drug discovery and diagnostics.

    21:57 | The Dual-Edged Sword of AI: AI’s capacity to both create and detect defects/fake information.

    23:15 | Highest ROI from AI: Why monetary ROI in banking might be lower due to RBI restrictions and why the manufacturing sector is set for the highest ROI.

    27:54 | 2030 Prediction: A huge reduction (95%) in traditional software engineering jobs is expected, while transportation, including flying cars, will be revolutionized.

    30:09 | The Most Valuable Skills: Soft skills like resilience, adaptability, critical thinking, and reasoning are essential for navigating the uncertain future.

    32:31 | Ethical AI & Regulation: Discussions around NASSCOM’s Responsible AI white papers and the India AI Mission, including the stringent policies of the DPDP Act.

    35:54 | Jobs of the Future (Next 7-10 Years): Huge demand is predicted for core sectors in India, including semiconductors, manufacturing, electronics, communications, and automotive.

    39:47 | AI for Autism Advocacy: Using technology to address the societal “taboo” around autism and employing AI-powered wearables (Google Android XR) and robots (Milo) to train emotional understanding in neurodiverse individuals.

    48:00 | Debunking the Myth: Data scientists do not necessarily earn more than every other skilled engineer; high value comes from having skills that cannot be replaced.

    Connect with Karmine (Founders):

  • EP: 1 Karmine Kompass, Future of Work, Powered by People | Shreyas Tonse, Zensible

    Welcome to the inaugural episode of Karmine Kompass: Pivotal Conversations!

    We kick off our journey to excellence with Shreyas Tonse of Zensible, the world’s first Total Experience (Tx) company in HR technology. This conversation is your roadmap to understanding the strategic shift needed to succeed in the digital-first era.

    We dive deep into why enterprises need to stop viewing HR software as fragmented tools and start treating it as a unified, strategic ecosystem that maximizes business value and employee experience.

    Connect with Shreyas Tonse & Zensible:

    About Karmine Consulting:

    Karmine Consulting is dedicated to guiding leaders through pivotal conversations. Subscribe for weekly insights that inspire, ignite, and align your business strategy.

    #KarmineKompass #ShreyasTonse #Zensible #HRTech #TotalExperience #LeadershipPodcast #BusinessStrategy #KarmineConsulting #AIinHR #FutureOfWork

  • Agentic AI: A New Era for Finance Operations

    Changing Dynamics in Finance Operations

    The world of finance operations is undergoing rapid transformation. Over the past decade, organizations have pursued greater efficiency, moving from manual processes to transactional Robotic Process Automation (RPA), and then to holistic hyper-automation. While each phase has delivered incremental gains, the next evolutionary leap is not merely about doing things faster but about doing things autonomously and intelligently.

    Today, Agentic AI (autonomous AI “agents” that can perceive, reason, and act) is emerging as the next evolutionary step. Industry experts note that this transition to agentic AI is a natural progression in the automation journey, building on the foundations of machine learning, traditional AI models, and generative AI. In fact, agentic AI is touted as “the operating logic of tomorrow’s enterprise,” promising new levels of cost efficiency and growth for those who embrace it.

    What is Agentic AI?

    Agentic AI refers to intelligent systems designed to autonomously accomplish specific goals with limited human intervention. The difference becomes clear when comparing their operating models:

    • Traditional Automation/RPA: Follows predefined rules or scripts; great for repetitive tasks but brittle when conditions change.
    • Generative AI: Produces outputs (text, code, etc.) in response to prompts; powerful for content and analysis, yet it’s largely reactive.
    • Agentic AI: Goes further by being proactive. It can set objectives, plan multi-step actions, make independent decisions, and adapt to new information. An agentic AI is less like a calculator and more like a junior colleague that can handle tasks end-to-end. Importantly, it operates on a goal and feedback loop rather than one prompt at a time.

    This ability to carry out multi-step processes and integrate with enterprise systems is a hallmark of agentic AI.

    Key Attributes of Agentic AI – The Five Pillars

    Agentic AI is defined by five core pillars that set it apart from traditional automation and earlier AI systems:

    • Goal-driven: Agentic AI operates with clear objectives and continuously aligns its actions to achieve defined outcomes (e.g., reduce accounts payable cycle time), keeping the end goal central across all activities.
    • Multi-step Planning & Orchestration: It can break complex objectives into sequenced actions, coordinate multiple tools (e.g., ERP, data warehouse, GenAI for analysis) or sub-agents, and execute end-to-end workflows through an iterative think-plan-act-evaluate-refine loop.
    • Autonomous Decision-Making: Unlike static automation, the agent makes independent, context-aware decisions and manages exceptions dynamically without needing step-by-step human intervention, enabling true 24/7, near-continuous operations.
    • Continuous Learning & Adaptation: Through feedback and learning mechanisms, agent models improve over time, adapting to new scenarios, regulatory changes, and process variations, thus increasing accuracy and outperforming static rule-based automation.
    • Transparency, Auditability & Trust: Built-in explainability, robust audit trails, and human oversight ensure decisions are traceable, compliant, and reviewable, upholding the highest standards of governance.

    Together, these pillars allow agentic AI to function as a reliable, autonomous colleague in finance, capable of understanding context, executing complex processes, learning from outcomes, and operating transparently within defined guardrails.
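    To make the pillars above concrete, here is a minimal, hypothetical sketch of the think-plan-act-evaluate-refine loop applied to an accounts-payable queue. The `APAgent` class and its fields are illustrative inventions for this example, not a reference to any product or framework.

```python
from dataclasses import dataclass, field

@dataclass
class APAgent:
    """Hypothetical agent pursuing one finance goal: clearing an invoice queue."""
    goal_threshold: int                              # target: open invoices remaining
    open_invoices: list = field(default_factory=list)
    audit_log: list = field(default_factory=list)    # pillar 5: auditability

    def plan(self):
        # Pillar 2: break the goal into sequenced, safe actions.
        return [inv for inv in self.open_invoices if inv["matched"]]

    def act(self, batch):
        # Pillar 3: act autonomously on unambiguous cases only.
        for inv in batch:
            self.open_invoices.remove(inv)
            self.audit_log.append(("posted", inv["id"]))

    def evaluate(self):
        # Pillar 1: measure progress against the defined goal.
        return len(self.open_invoices) <= self.goal_threshold

    def run(self, max_iters=10):
        # Think -> plan -> act -> evaluate -> refine loop.
        for _ in range(max_iters):
            if self.evaluate():
                return True
            batch = self.plan()
            if not batch:
                # Nothing safe to act on: escalate (human-in-the-loop guardrail).
                self.audit_log.append(("escalate", "human review"))
                return False
            self.act(batch)
        return self.evaluate()
```

    The key design point is that the loop is driven by the goal (`evaluate`), not by a fixed script: the agent keeps iterating, and anything it cannot resolve safely lands in the audit log for a human.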

    Why Agentic AI in Finance?

    The business case for Agentic AI in finance lies in its fit with the realities of modern financial operations – high data volumes, repetitive processes, time-critical decisions, and strict compliance requirements.

    • End-to-end automation: Agents can potentially orchestrate entire finance processes, not just tasks, reducing handoffs and freeing teams for higher-value work.
    • Faster decision-making: Real-time analysis and execution compress cycle times, enabling instant routine decisions and quicker insights for risk, treasury, and control functions.
    • Improved accuracy and compliance: Reduced manual intervention lowers error rates, while consistent policy application and anomaly detection strengthen compliance and fraud detection.
    • Scalable, 24/7 operations: Agents can operate continuously and scale seamlessly during peak periods without proportional increases in headcount.
    • Adaptive handling of complexity: Unlike rigid automation, agents learn, manage exceptions, and adjust workflows as scenarios change, with sufficient ‘human-in-the-loop’ intervention where needed.

    In essence, Agentic AI allows finance teams to achieve more throughput and intelligence with less manual effort – cutting costs, improving resilience, and shifting human focus from routine execution to analysis, strategy, and value creation.

    The Architecture: Moving Beyond Silos

    In an agentic finance model, the CFO’s role expands from a sponsor to an architect. CFOs define the outcomes agents are accountable for, the risk boundaries they must respect, and the governance structures that ensure trust.

    The true complexity and power of this era lie in the Agentic Architecture. It is not about deploying a single “super-bot,” but rather orchestrating a federation of specialized, coordinated agents that communicate seamlessly.

    Consider the complexity of a global supply chain finance process. This might require specialized agents for:

    • Handling invoice matching and payment initiation within the ERP.
    • Optimizing cash flow and managing foreign exchange exposure based on payment timing.
    • Continuously screening vendors and transactions against sanctions lists and internal policy.

    These agents operate like a well-drilled team, sharing context and passing execution authority based on their specialized skills. This architectural shift enables organizations to break down functional silos, achieving true end-to-end process automation and optimization that traditional RPA could never manage.

    Key Use Cases and Opportunities in Finance

    1. Dynamic Financial Planning & Analysis (FP&A): One of the most impactful areas is financial planning and analysis. Agentic AI can turn traditional periodic forecasting into a continuous, real-time activity. For example, AI agents can integrate data from ERP systems, market feeds, and spreadsheets to constantly update forecasts and run “what-if” scenarios. This creates a kind of digital financial twin that can simulate outcomes.

    Agents can also provide nuanced analysis, spotting trends or anomalies in financial data that warrant attention. In essence, forecasting becomes more precise and proactive, with AI continuously recalibrating projections.

    Impact: Forecasting becomes more precise, proactive, and directly actionable, dramatically improving resource allocation and capital efficiency.

    2. Procure-to-Pay (P2P) Orchestration: AI agents can streamline invoice handling, for example, by automatically pulling data from incoming invoices, cross-validating it against purchase orders and goods receipts, and flagging any discrepancies. Tedious tasks like invoice coding, approval routing, and journal entries can be handled start-to-finish by an agent.

    Impact: Lower error rates, accelerated payment cycles, and a shift of A/P staff from data entry to exception resolution.
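    As an illustration of the cross-validation step described above, here is a minimal sketch of a three-way match an agent might apply before posting an invoice. The field names (`po_number`, `qty`, `unit_price`) and the 2% price tolerance are assumptions for this example.

```python
def three_way_match(invoice, purchase_order, goods_receipt, price_tolerance=0.02):
    """Hypothetical three-way match: invoice vs. purchase order vs. goods receipt.

    Returns (ok, discrepancies). Anything flagged goes to a human for
    exception resolution rather than being auto-posted.
    """
    issues = []
    if invoice["po_number"] != purchase_order["po_number"]:
        issues.append("PO number mismatch")
    if invoice["qty"] != goods_receipt["qty_received"]:
        issues.append(
            f"quantity: invoiced {invoice['qty']}, received {goods_receipt['qty_received']}"
        )
    # Allow a small tolerance on unit price (e.g., rounding or FX differences).
    if abs(invoice["unit_price"] - purchase_order["unit_price"]) > (
        price_tolerance * purchase_order["unit_price"]
    ):
        issues.append("unit price outside tolerance")
    return (not issues), issues
```

    A clean match returns `(True, [])` and can be posted straight through; any discrepancy routes the invoice to the exception queue.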

    3. Accelerated Vendor Onboarding & Due Diligence: Multi-agent workflows can accelerate KYC/KYB, sanctions screening, and risk scoring, reducing onboarding from days to minutes while enabling continuous monitoring and robust audit trails. Imagine a team of AI agents working together: one agent gathers the vendor’s public data and documents, another cross-checks them against databases (for sanctions, politically exposed persons, adverse media), and a third evaluates the risk level or compliance requirements, all with no human handoffs in between.

    By handling the grunt work of due diligence, and doing it thoroughly and consistently, agents can help onboard vendors faster while enhancing compliance. Compliance officers can then focus on the truly suspicious cases rather than sifting through false positives.

    Impact: Onboarding timelines reduced from days to minutes, robust and continuous monitoring, and allowing compliance officers to focus solely on high-risk, ambiguous cases.

    4. Continuous Financial Close & Consolidation: The accounting close process (monthly, quarterly, annually) involves aggregating data from various systems, reconciling accounts, and preparing consolidated financial statements. It’s typically a labor-intensive crunch. In one case, a manufacturing company deployed an AI agent to manage its month-end close. The agent autonomously gathered trial balances from multiple ERPs, applied matching rules to reconcile entries, and even proposed adjusting journal entries for the finance team to review. It ultimately cut the close cycle by roughly 50%.

    This example highlights how an agent can take over repetitive close tasks and execute them faster and more accurately. Additionally, because the agent works continuously, it enables a continuous close environment.

    Impact: A month-end close cycle cut by approximately 50%, with accounting staff freed up for variance analysis.

    Conclusion: Embracing the Agentic Future

    Agentic AI marks a fundamental, irreversible shift, transforming finance from an operations utility into an agile, strategic growth engine. Early adopters are already seeing material gains, including faster closes, meaningful cost reductions, and improved accuracy, while freeing finance teams to focus on strategy, analysis, and innovation rather than execution.

    Adoption, however, is not plug-and-play. It requires strong governance, transparency, ethical guardrails, and deliberate change management to ensure trust, control, and human oversight remain intact. When these foundations are in place, the operational and strategic upside far outweighs the risks.

    Looking ahead, finance functions will not simply become faster or more efficient; they will become decisively intelligent and increasingly autonomous. Agentic AI marks the inflection point where finance shifts from executing processes to continuously steering outcomes, operating with speed, precision, and foresight that traditional models cannot match.

    Organizations that invest early and responsibly will secure enduring advantages in cost efficiency, resilience, and decision quality, transforming finance from a transactional back office into a strategic, always-on growth engine. The era of autonomous finance is no longer theoretical; it is already taking shape. Those who embrace it with strong governance, clear intent, and human judgment at the core will not only lead the transition, but help set the standards by which the future of finance will be defined.

  • When Business Accounts Become Mules: The New Battlefield in Financial Fraud

    For some time now, “money mule” typologies have largely involved vulnerable individuals who were persuaded or coerced into moving illicit funds. Today, that typology is shifting toward exploiting legitimate business current accounts, especially those belonging to MSMEs, to layer and route illicit funds at scale. This evolution is not just tactical; rather, it represents a well-thought-out reconfiguration of how criminal networks exploit the trust fabric underpinning the financial system.

    Recent cases reported across Indian banks highlight how MSME accounts are being hijacked, rented, or compromised to facilitate fast-moving, high-velocity transfers. This trend is accelerating, and financial institutions must re-evaluate their fraud detection and prevention strategies before systemic trust erodes any further.


    Business Accounts – New Mule Infrastructure

    1. Higher Transaction Thresholds

    Business current accounts routinely handle large-value transactions. A ₹3-5 lakh credit in an MSME account appears routine, whereas the same amount would seem anomalous in a retail account. This gives fraudsters a degree of anonymity through normalcy.

    2. Legitimacy and Established History

    Unlike newly opened personal bank accounts, corporate entities generally come with a certain level of banking history, GST filings, payroll patterns, and vendor relationships. This legitimacy provides the necessary camouflage for fraudsters to move funds through current accounts.

    In what is often described as the “Rent-a-Current-Account” model, struggling businesses, especially those under credit stress, rent out their accounts for commissions; the funds are then layered through vendors, wallets, and forex channels before exiting the system.

    3. Lower Behavioural Predictability

    MSME activities differ dramatically across sectors based on their seasonality, client mixes, and growth cycles. This diversity makes it difficult for traditional transaction monitoring systems to establish a baseline for what “good” account behavior looks like.

    4. Insider or Peripheral Collusion

    Fraudsters capitalize on dormant partners, distressed business owners, accountants, or even compromised vendor relationships. In other cases, attackers gain access through identity compromise or invoice-manipulation attacks.

    Criminal networks now favor “fewer, high-trust mule accounts” over a network of small retail mules, allowing them to transfer larger volumes with reduced exposure.

    5. Account Takeover via Business Email Compromise

    Cybercriminals compromise corporate email systems, intercept invoices, alter payment instructions, and quietly redirect funds into compromised or rented business accounts.

    6. Shell Firms Masquerading as Genuine MSMEs

    Criminals create fully documented shell companies, complete with incorporation proofs, basic trade activity, and GST registrations, to simulate legitimacy while acting as laundering pipelines.

    The common thread across these typologies is the exploitation of blind spots within traditional bank surveillance and due diligence procedures.


    Why Traditional Controls Fail

    1. Static KYC cannot keep up with dynamic risk

    KYC establishes identity at the time of onboarding or during periodic refresh, but businesses often evolve faster than the KYC cycle, sometimes into riskier entities. Without dynamic risk-refresh mechanisms or perpetual KYC procedures, banks remain blind to behavioural drift.

    2. Transaction monitoring typologies are not designed for MSME complexity

    Rule-based transaction monitoring engines falter with MSMEs whose cash flows are non-linear, seasonal, and shaped by sector dynamics. As a result, generic rules either flood systems with false positives or miss detecting targeted mule activity.

    3. Lack of entity-resolution across accounts & identities

    A business is not a single account; rather, it is an ecosystem of promoters, directors, accountants, devices, IPs, and counterparties. Legacy systems analyze each data point in isolation and struggle to connect these signals into a unified risk picture. This creates blind spots that delay detection and prevent banks from recognizing coordinated or evolving threats across the wider business ecosystem.

    4. Limited Visibility Beyond the Bank’s Perimeter

    Fraud patterns often spread across institutions, but without consortium-level intelligence or federated learning programs, these signals stay under the radar. Fraudsters take advantage of this fragmentation, moving quickly between institutions to stay ahead of detection.


    Building Models that work – Our Perspective

    The surge in business-account mule activity highlights a crucial industry lesson: fraud cannot be solved through transaction monitoring alone. Detecting mule behavior, particularly in corporate accounts, requires multi-dimensional intelligence that connects digital signals, human context, and behavioural narratives.

    Karmine’s perspective centers on four essential pillars.

    1. Customer 360°: Moving Beyond Fragmented Risk Views

    A robust Customer 360° framework brings together identity, device, and behavioural signals across both retail and corporate profiles and integrates fraud and AML so that indicators such as account-takeover attempts or suspicious logins strengthen AML risk scoring. It also incorporates network-level intelligence to reveal links to shell firms, risky beneficiaries, or high-velocity counterparty rings.

    Traditional systems often treat fraud and AML as separate domains, even though mule activity sits directly at their intersection. A single, entity-level view can uncover risk patterns that often get missed in siloed systems.

    Only when a bank views the business as a single, holistic entity, rather than as a collection of accounts, can mule activity be detected in time.

    2. Early Risk Signals Appear Long Before Transactions Do

    Documentation inconsistencies, KYB anomalies, and behavioural red flags often emerge months before any transactional anomalies surface. These early signals provide valuable insight into whether a business is stable, legitimate, and operating as declared.

    Examples include mismatches between the stated nature of business and actual financial flows, templated or recycled incorporation documents, unexplained changes in ownership or authorized signatories, and income lines or operational footprints that do not match the speed of fund inflows. These indicators often hold predictive value and can highlight elevated risk before money movement becomes suspicious.

    To use this intelligence effectively, banks must integrate these non-transactional signals into their ongoing monitoring processes. When onboarding and KYB data is treated as one-time paperwork instead of continuous risk input, institutions lose early warning capabilities that can prevent misuse long before transactional behavior deteriorates.

    3. Relationship Managers – crucial interpreters of customer behavior

    For corporate and MSME segments, Relationship Managers (RMs) are a primary source of contextual understanding. They know their clients’ operational realities, seasonality, and market cycles, yet in most banks the RM layer remains disconnected from fraud and AML signals.

    To be effective, RMs need the ability to spot deviations between expected business behavior and actual transaction flows, escalate sudden shifts in volume, beneficiaries, or geographies, and validate whether a company’s banking behavior aligns with the patterns observed. Digital intelligence can detect anomalies, but only human context can explain them.

    4. Strong, Continuous KYC/KYB – A Non-Negotiable

    The shift from a legitimate business to a mule entity is often gradual, which makes static KYC frameworks insufficient on their own. A more continuous, risk-based KYB approach is needed, where updates are prompted by behavioural changes rather than waiting for a scheduled refresh.

    In practice, this means keeping an eye on sector-specific cash-flow patterns, checking whether the business model still appears viable, and periodically validating key details such as income sources, counterparties, staffing, and day-to-day operations. Simple, contextual risk scoring can help highlight when a business begins to deviate from its usual activity. In this model, understanding how a business operates becomes just as important as confirming who owns it.
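    One way such contextual risk scoring could look, as a minimal sketch: a z-score of the current month's turnover against the business's own baseline, used to trigger a KYB refresh. The function names, the z-score approach, and the threshold of 3 are illustrative assumptions, not a prescribed methodology.

```python
from statistics import mean, stdev

def drift_score(history, current):
    """Hypothetical contextual drift score for monthly account turnover.

    `history` is a list of past monthly turnovers for this business; the
    score is a z-score of the current month against that own baseline.
    """
    if len(history) < 3:
        return 0.0                     # not enough baseline to judge
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        sigma = max(mu * 0.1, 1.0)     # floor to avoid divide-by-zero
    return (current - mu) / sigma

def needs_kyb_refresh(history, current, threshold=3.0):
    # Trigger an event-driven KYB refresh when behaviour drifts beyond
    # the threshold, rather than waiting for the scheduled review cycle.
    return abs(drift_score(history, current)) >= threshold
```

    The point is that the trigger is relative to the business's own history, so a seasonal retailer and a steady services firm are each judged against their own normal, not a one-size-fits-all rule.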


    How Karmine Consulting can help

    For banks dealing with MSME portfolios, the real challenge is not just detecting mule accounts but understanding where and why the current system is blind. As a boutique AFC consulting firm, we support institutions across their core considerations:

    • Governance & Risk Profile: We help build a sharper, enterprise-level view of the MSME mule risk profile by identifying which sectors, clusters, ownership patterns, and transaction behaviors create the highest exposure.
    • Data: We help map the data landscape end-to-end, assessing where relevant signals sit across KYC, GST data, account behaviors, trade documents, RM logs, and counterparty flows, and how much of this can be orchestrated to strengthen detection without waiting for multi-year modernization.
    • Process: We help refine processes for faster identification and cleaner reporting, redesign accountability structures across the three lines of defense, and define the RM/analyst skill sets needed to distinguish legitimate MSME churn from mule activity.
    • Tech: Finally, we help banks pinpoint the exact tech investments that will move the needle across entity resolution, network-graph analytics, document forensics, or continuous-KYC triggers.

    Through our interventions, we help ensure institutions build a scalable, intelligence-led MSME mule-detection capability rather than repurposing retail-focused controls.

  • Digital Marketplace Scams: Follow the Money and Fight Back with AI

    Introduction

    Digital marketplaces have revolutionised commerce by enabling instant global trade at scale. This same infrastructure that connects billions of buyers and sellers has also opened new territory for financial crime. Marketplace scams, from fake listings and cloned storefronts to payment diversion schemes, are now among the fastest-growing fraud typologies worldwide.

    According to a 2024 report, global scam losses exceeded US $1 trillion in 2023.

    A subsequent 2025 survey by the same organizations found that roughly 23% of adults worldwide reported losing money to a scam. In the United States, scam-related fraud incidents rose 56% in 2024 and financial losses more than doubled. Scams have now overtaken traditional card abuse as the dominant form of online fraud.

    For banks, this escalation is significant. Marketplace scams intersect directly with formal payment systems, prompting regulatory scrutiny and placing pressure on financial institutions to treat these typologies as part of the broader financial crime agenda. As consumer trust erodes, regulators are tightening oversight and financial institutions are racing to strengthen defences. Increasingly, that defence is AI-powered, combining fraud prevention with the forensic discipline of “follow the money.”

    The Rising Threat of Digital Marketplace Scams

    Marketplaces thrive on accessibility, which makes them ideal hunting grounds for organized fraud rings posing as legitimate sellers. For financial institutions, digital marketplaces represent a high-velocity fraud environment. Criminal networks exploit automation, anonymity, and the high-volume transaction flow to strike quickly and disappear before detection.

    Common tactics include:

    • Non-delivery and counterfeit goods: Fake online stores offer large discounts, take payment, and disappear without sending the product.
    • Seller impersonation: Scammers copy or hack trusted seller accounts and redirect buyers to pay outside the platform.
    • Phishing and fake support: Criminals pose as marketplace staff or buyers to trick users into sharing passwords or payment details.
    • Overpayment and refund scams: Fraudsters overpay with stolen cards and ask for a refund before the original payment is reversed.

    Behind these familiar fronts lies a professionalized underground economy. Fraud operations share data, reuse templates, and now deploy generative AI to create fake storefronts, invoices, and customer chats.

    Interpol now estimates cyberfraud generates around $3 trillion annually, surpassing the profits of the global drug trade.

    A recent Reuters investigation revealed internal Meta documents suggesting that up to 10% of the company’s projected 2024 revenue, roughly US $16 billion, was linked to ads related to scams or prohibited goods. The same algorithms that promoted legitimate sellers were also monetising fraudulent campaigns. This showed how platform design can amplify deception when integrity controls are not embedded from the start.

    The Cost of Fraud: Why Businesses and Banks Care

    For consumers, marketplace scams mean lost money. For the financial sector, they mean chargebacks, regulatory exposure, and reputational damage. When fraudulent sellers disappear, banks and card networks often absorb refund costs and operational losses.

    Global e-commerce fraud losses are projected to increase from US $44 billion in 2024 to US $107 billion by 2029.

    This represents an increase of around 141%, according to Juniper Research. A separate TransUnion study found that companies worldwide lose an average of 7.7% of annual revenue to fraud-related costs.

    Regulatory frameworks are reinforcing accountability:

    • Singapore’s Scam Liability Framework (2024) requires strict real-time controls and full reimbursement where banks fail to protect customers.
    • The UK Payment Systems Regulator (PSR) introduced mandatory reimbursement for authorised push payment (APP) scams in 2025.
    • The European Union’s Payment Services Regulation (PSR2) and the forthcoming AI Act strengthen fraud prevention and transparency requirements for platforms and payment providers.

    Collectively, these frameworks shift the burden from voluntary security measures to enforceable obligations. Fraud prevention is now being positioned as a financial-crime compliance priority.

    Following the Money: Turning AML Discipline on Scams

    Every scam must move money. This gives banks a unique vantage point. The same analytical discipline used in AML investigations can expose the structures behind marketplace scams.

    Banks and payment providers use:

    • Transaction pattern analysis to identify clusters of small, fast withdrawals typical of cashout networks.
    • Link analysis to map shared IP addresses, devices, and beneficiary accounts across multiple seller profiles.
    • Graph analytics to visualize connected fraud rings spanning platforms or borders.

    When several seller accounts route payments to the same endpoint, or when refund flows repeatedly converge on identical processors, these systems flag the anomaly. The insight is simple here – money leaves digital footprints long after a fake storefront disappears.
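    As an illustration of the convergence check described above, here is a minimal sketch of link analysis that flags beneficiary accounts where payments from many distinct seller accounts converge, a common cash-out pattern. The data shape (seller, beneficiary, amount) and the three-source threshold are assumptions for the example.

```python
from collections import defaultdict

def find_convergence_points(payments, min_sources=3):
    """Hypothetical link analysis over payment edges.

    `payments` is a list of (seller_account, beneficiary_account, amount)
    tuples. Returns beneficiaries funded by at least `min_sources`
    distinct sellers, mapped to the set of those sellers.
    """
    sources = defaultdict(set)
    for seller, beneficiary, _amount in payments:
        sources[beneficiary].add(seller)
    # Many unrelated storefronts paying into one endpoint is the anomaly.
    return {b: s for b, s in sources.items() if len(s) >= min_sources}
```

    In production this kind of check would run over a proper graph store with device and IP edges as well, but even this simple aggregation captures the core idea: the fake storefronts vary, while the cash-out endpoint repeats.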

    Cross-border data sharing and federated learning allow banks to trace typologies across jurisdictions without exposing private data. This capability is essential because fraud networks operate globally while regulation still largely remains national.

    AI to the Rescue: Intelligent, Adaptive Defences

    Fraudsters are increasingly weaponizing AI through deepfake voices, synthetic identities, and automated chat scripts to elevate the sophistication of marketplace scams. Financial institutions are responding by embedding AI across fraud systems to identify anomalies in real time and learn continuously.

    Key applications include:

    • Real-time anomaly detection: Scans behaviour and transaction data continuously to identify unusual patterns within milliseconds.
    • Predictive risk scoring: Evaluates every payment, login, or listing by assigning dynamic risk probabilities.
    • Evidence analysis: Document and content analysis that flags recycled images, forged seller documents, repeated scam scripts, and counterfeit invoices tied to fraudulent merchants.
    • Identity screening: Uses facial matching, liveness checks, and document validation to confirm seller authenticity.
    • Federated learning: Enables banks to share fraud insights securely without exposing customer data.
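A predictive risk score of the kind listed above can be sketched in a few lines. The signal names, weights, and thresholds below are invented for illustration; production systems learn them from labelled fraud data rather than hand-coding them:

```python
# Illustrative only: weights and signal names are hypothetical, not any
# vendor's actual model. Real systems fit these from labelled fraud data.
RISK_WEIGHTS = {
    "new_seller_account": 0.25,      # account created in the last 30 days
    "price_far_below_market": 0.30,  # listing priced to lure victims
    "payout_to_flagged_endpoint": 0.35,
    "mismatched_geo_ip": 0.10,
}

def risk_score(signals):
    """Combine boolean signals into a 0-1 risk probability proxy."""
    score = sum(w for name, w in RISK_WEIGHTS.items() if signals.get(name))
    return min(score, 1.0)

def triage(signals, block_at=0.6, review_at=0.3):
    """Map a score onto the three usual dispositions."""
    s = risk_score(signals)
    if s >= block_at:
        return "block"
    if s >= review_at:
        return "manual_review"
    return "allow"

print(triage({"new_seller_account": True, "price_far_below_market": True}))
# → manual_review  (0.25 + 0.30 = 0.55)
```

The dynamic element in real deployments is that both the weights and the thresholds are re-estimated continuously as new fraud outcomes arrive.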

    In 2025, a SWIFT pilot involving 13 international banks showed that federated learning combined with privacy-enhancing technologies doubled real-time detection effectiveness.

    These models learned collectively while keeping sensitive information protected. Mastercard has reported similar advances, noting faster detection of compromised cards and greater ability to intercept fraudulent transactions before authorisation.

    The message is clear. AI has become both the weapon and the shield. Institutions that do not modernise will fall behind the curve.

    Layered Defences and Collective Vigilance

    No single tool can solve fraud. Leading institutions now combine technology, human judgment, and ecosystem collaboration to build layered resilience.

    • Multifactor authentication and transaction controls prevent account takeovers and rapid-fire payouts.
    • Real-time monitoring and customer kill switches allow rapid containment when fraud is suspected.
    • Consumer-facing warnings have reduced scam success rates by prompting users before they complete risky transfers.
    • Industry consortia such as the Global Anti-Scam Alliance are building shared intelligence networks that complement federated learning models.
    • Regulatory frameworks (EU’s forthcoming AI Act) require platforms to disclose AI-generated content, which reduces the spread of deepfake scam advertising.

    These measures represent a whole-of-network approach where banks, fintechs, marketplaces, and regulators collaborate to strengthen digital trust.

    Conclusion: Trust Is the Currency of Digital Commerce

    Digital marketplace scams represent the financial crime frontier of the decade, where cyber deception meets payment infrastructure. The response requires advanced analytics, AI, and ecosystem-wide collaboration.

    Banks can dismantle scam networks by tracing the money flows behind digital storefronts. AI deployed across detection layers keeps them ahead of fast-changing typologies. Collaboration with regulators and technology firms then closes the systemic gaps and loopholes that fraud networks exploit.

    The lesson from the Meta ad-scam revelations is clear. When deception becomes profitable, trust becomes optional. Financial institutions now play a central role in safeguarding the digital marketplace, and fraud prevention must reflect that responsibility.

    Trust is the new currency of digital commerce. Integrity is the regulator that protects it.

  • Less Noise, More Focus: How FinCEN is quietly rewiring the AML narrative

    Less Noise, More Focus: How FinCEN is quietly rewiring the AML narrative

    Introduction

    Recently, FinCEN released two developments that deserve close attention: the October 2025 SAR FAQs and a proposed Cost of Compliance Survey for NBFIs. Read together, these signals point to a shift away from measuring AML effectiveness through volume and toward evaluating the quality and intelligence value of what is submitted.

    This is a significant reframing. The intent is not to reduce vigilance, but to challenge the long-standing assumption that more SARs automatically reflect stronger control and that more spend implies deeper compliance entrenchment.

    The question is whether this shift will give institutions enough regulatory confidence to reduce defensive filing and instead base filing decisions on contextual suspicion and risk evidence.

    What the SAR FAQs clarify

    FinCEN is drawing a subtle boundary between suspicious behaviour and alert thresholds. The FAQs clarify that:

    • Transactions near the US $10,000 currency threshold do not, by themselves, automatically require a SAR. A reason to suspect illicit activity remains the key trigger.
    • A separate account review is not obligatory post-SAR, unless the institution’s risk analysis supports it.
    • Institutions are not mandated to document every decision not to file a SAR, beyond alignment with risk-based internal controls.

    This is a direct encouragement to reduce mechanical alerting and reporting without weakening coverage integrity, and to move towards intelligence-driven filings.
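To make the FAQ logic concrete, here is a toy decision sketch (the indicator names are invented for illustration) showing that proximity to the $10,000 threshold alone does not drive a filing, while documented reasons to suspect do:

```python
CTR_THRESHOLD = 10_000  # US currency transaction reporting threshold

def sar_decision(amount, suspicion_indicators):
    """Risk-based filing sketch per the FAQ logic: a near-threshold
    amount with no other evidence does not by itself require a SAR;
    documented suspicion indicators do. Indicator names illustrative."""
    if suspicion_indicators:  # e.g. structuring pattern, no business purpose
        return ("file_sar", suspicion_indicators)
    # Amount near CTR_THRESHOLD alone is not a trigger
    return ("no_sar", [])

print(sar_decision(9_800, []))
# → ('no_sar', [])
print(sar_decision(9_800, ["repeated just-below-threshold deposits"]))
# → ('file_sar', ['repeated just-below-threshold deposits'])
```

The rationale list attached to each filing decision is exactly the kind of contextual evidence that makes a non-filing defensible.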

    The Proposed Compliance Cost Survey

    FinCEN has proposed a Cost of Compliance Survey and is seeking comments before implementation. This survey indicates their intent to build evidence before recalibrating the compliance burden. The survey targets casinos, money services businesses (MSBs), dealers in precious metals and stones, credit card operators and loan and finance companies because these segments carry high regulatory overhead but often may not produce proportional intelligence value.

    Structural changes cannot be justified based on industry sentiment or fatigue but require proof that the current architecture is not positioned to generate intelligence.

    This survey aims to distinguish where compliance effort translates into useful insight for enforcement versus where it simply creates operational volume:

    • Which activities generate genuine investigative value?
    • Which activities have high workload with low-intelligence outcomes?

    Shift in Regulatory Posture

    Read together with the SAR FAQs, this indicates a meaningful shift in supervisory posture.

    • From quantity to quality: Active dissuasion of reflexive filings triggered solely by thresholds or filed as a defensive practice. The directive questions whether the cost of monitoring and filing is justified by results; a reduction in SAR output will only work if coverage is not compromised.
    • From burden to calibration: The survey acknowledges that AML/CFT compliance imposes real costs and that regulatory design should reflect proportionality.
    • From checklist to intelligence: The emphasis is shifting toward genuine risk-based programs driven by intelligent monitoring and meaningful results rather than sheer volume. This means firms will have to implement stronger, more comprehensive controls to defend their non-filing decisions.

    Some parts of the AML stack may be over engineered relative to the intelligence they produce. If the survey results confirm this, FinCEN will have the evidence to rebalance the compliance burden without being accused of weakening their stance against money laundering and terrorism financing.

    Our view: Where does this direction lead?

    If regulators start framing effectiveness in terms of signal value rather than output, firms will be expected to justify why their control design looks the way it does. Supervisors will not only look at how many alerts or SARs are generated, but whether the architecture that created them is proportionate, risk anchored and defensible.

    That requires some structural shifts:

    Customer 360 needs to become real infrastructure instead of a conceptual diagram on a slide. Entity resolution, unified data lakes, consistent identifiers and relationship mapping have to be working engines that support detection, not just reference points. Until analysts can see behavioural patterns, network context and history in one place, coverage will remain shallow and decisions will continue to default to defensive filing.
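As a minimal illustration of the entity-resolution step (hypothetical records and match keys, far simpler than a production Customer 360 engine), the sketch below merges customer records that share a normalised email or phone number:

```python
import re

def normalise(record):
    """Produce candidate match keys from a record (illustrative rules)."""
    keys = set()
    if record.get("email"):
        keys.add(("email", record["email"].strip().lower()))
    if record.get("phone"):
        digits = re.sub(r"\D", "", record["phone"])
        keys.add(("phone", digits[-10:]))  # compare on last 10 digits
    return keys

def resolve(records):
    """Merge records that share any normalised identifier into one entity."""
    entities = []  # each entity: (key_set, record_indices)
    for i, rec in enumerate(records):
        keys = normalise(rec)
        merged = [e for e in entities if e[0] & keys]
        for e in merged:
            entities.remove(e)
            keys |= e[0]
        indices = [i] + [j for e in merged for j in e[1]]
        entities.append((keys, indices))
    return [sorted(idx) for _, idx in entities]

records = [
    {"email": "A.Shah@example.com"},
    {"email": "a.shah@example.com", "phone": "+1 (212) 555-0134"},
    {"phone": "212-555-0134"},
    {"email": "other@example.com"},
]
print(resolve(records))
# → [[0, 1, 2], [3]]
```

Three superficially different records collapse into one entity, which is the precondition for seeing one customer's behaviour, network and history in a single view.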

    Federated learning needs to progress to ecosystem scale. This does not require firms to pool raw data. It requires a pattern / signal exchange layer that allows multiple institutions to strengthen typology understanding and accelerate detection maturity without breaching privacy.

    It also forces a shift internally. Most institutions still do not have effective horizontal signal sharing across their own product, fraud, AML, cyber security and customer teams. If internal departments cannot share context consistently, external signal exchange will not produce an uplift.

    Given the pace of typology evolution, federated learning models will become necessary if institutions want sustainable accuracy.
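The mechanics can be sketched as federated averaging: each institution trains a shared model on its private transactions, and only the model parameters travel. The toy linear model and data below are illustrative; real deployments layer secure aggregation and other privacy-enhancing technologies on top:

```python
# Minimal federated-averaging sketch. Assumptions: a shared linear scoring
# model, honest participants, and no secure aggregation (real systems add
# privacy-enhancing technologies on top of this basic loop).

def local_update(weights, transactions, lr=0.01):
    """One round of local training on a bank's private data.
    Each transaction: (features, is_fraud). Raw data never leaves the bank."""
    w = list(weights)
    for features, label in transactions:
        pred = sum(wi * xi for wi, xi in zip(w, features))
        err = pred - label
        w = [wi - lr * err * xi for wi, xi in zip(w, features)]
    return w

def federated_average(updates):
    """Coordinator averages weight vectors; it sees parameters, not data."""
    n = len(updates)
    return [sum(ws) / n for ws in zip(*updates)]

global_w = [0.0, 0.0]
bank_a = [([1.0, 0.2], 1), ([0.1, 0.9], 0)]  # first feature tracks fraud
bank_b = [([0.9, 0.1], 1), ([0.2, 1.0], 0)]
for _ in range(20):  # communication rounds
    updates = [local_update(global_w, bank) for bank in (bank_a, bank_b)]
    global_w = federated_average(updates)
print(global_w)
```

Both banks' models converge on the same typology signal (the first feature) even though neither ever sees the other's transactions.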

    Feedback-driven SAR programs are the need of the hour for effective recalibration. Today, SARs exit the institution with no structured utilisation signal returned. Without feedback, firms cannot measure the quality of their output, and quantity becomes the comfort metric. Even basic outcome metadata would allow firms to tune thresholds, recalibrate models and prioritise investigations based on what actually matters.

    The FCA and UK-FIU have demonstrated that structured feedback can be distributed in sanitised formats through information sharing, thematic insights and standardised communication without revealing sensitive investigation detail. A similar FinCEN version of that would significantly increase the value of industry effort.

    Model-driven analytics and AI need to move beyond threshold tuning and rule stacking. With recent developments, there is an increased expectation for models to be explainable, grounded in evidence and aligned to measurable signal improvement rather than generic accuracy.

    Analyst skill sets will also need to shift toward structured reasoning, feature literacy and narrative building based on pattern logic. These changes focus on improving control quality so that effort is applied where it produces intelligent signals rather than volume.

    Conclusion

    The real value shift is not reviewing / filing less. It is moving analyst time from first level alert dispositioning into investigation work that actually produces intelligence. Better data, privacy safe collaborative learning and feedback loops are the practical enablers.

    Lower noise will demand stronger defence of non-filing decisions because scrutiny will shift to the quality of rationale rather than the comfort of large numbers. Institutions that rebuild their data foundations, participate in privacy-safe shared learning and advocate for structured feedback loops will be aligned with this new supervisory trajectory.

    Institutions that cling to volume as the primary indicator of performance risk remaining trapped inside alert noise.

  • From Resistance to Readiness: Shaping AI-Confident Workforces

    From Resistance to Readiness: Shaping AI-Confident Workforces

    Artificial Intelligence has moved from being a buzzword in boardrooms to a daily reality in workplaces, from streamlining operations and assisting with customer service to powering creative brainstorming. As generative and agentic AI integrate into workflows, the success of AI doesn’t hinge on having the most advanced model – it depends on people. Without readiness, even the slickest of tech can fall flat. The World Economic Forum highlights that while AI could create as many as 170 million jobs by 2030, around 92 million may be displaced in the same period. These shifts show that building AI-confident workforces isn’t just about technology – it’s a human capability and cultural priority essential for navigating both opportunity and disruption.

    The Human Side of AI Adoption

    AI is already at scale. IBM’s Global AI Adoption Index 2023 reports that 42% of enterprises have implemented AI, and another 40% are experimenting. Yet many employees still approach AI with hesitation. An EY study found that 71% of U.S. employees worry about AI, with nearly half reporting increased concern over the past year. Three-quarters fear job loss, and 65% doubt their current roles will survive. These concerns are widespread and cannot be ignored.

    Resistance stems from uncertainty and overwhelm – employees question whether AI might make their roles redundant, if they can master unfamiliar tools, or whether using AI will be seen as taking shortcuts. This reflects not just skill gaps, but a lack of confidence and cultural readiness. IBM’s AI Readiness Index shows less than half of companies feel prepared for widescale integration. Organisations ignoring this emotional layer risk stalled adoption and derailed transformation.

    Readiness is not about buying software licenses; it’s about building behavioural and cultural foundations that help employees feel capable and safe using AI. With AI advancing rapidly – 44% of core skills expected to be disrupted within five years (WEF) – organisations must turn resistance into readiness, shifting the focus from “Can we implement AI?” to “Can our people embrace it?” By fostering curiosity, resilience, and behavioural competencies, organisations help employees grow alongside AI, boosting adoption and creating an agile, innovative, future-ready workforce.

    Mindset Shift: From Resistance to Innovation

    Shaping an AI-confident workforce requires a deliberate mindset shift. Employees must come to see AI as an enabler, not a competitor. Storytelling plays a big role here: sharing examples of how AI has solved customer pain points, reduced tedious tasks, or unlocked creative potential. When employees experience tangible wins, resistance gives way to curiosity.

    This cultural shift has been particularly visible in organisations like HCLTech, where large-scale reskilling efforts have been undertaken, with the premise that “AI is being introduced as a co-pilot to augment human capabilities, not replace them.” This lays emphasis on upskilling employees to take on higher-value tasks. Framing AI as a colleague at the workplace, rather than a rival, helps employees embrace the technology more readily.

    Embedding Social & Experiential Learning

    Traditional training – static modules, one-off workshops, or lengthy e-learning courses – focuses on information transfer but rarely supports habit-building or real-world confidence. That’s why many employees end up tuning out. A study on Microsoft 365 Copilot found employees often skipped formal onboarding videos, preferring hands-on use and peer discussions. This highlights a broader truth: people build confidence with AI not by passively consuming information, but by experimenting, sharing insights, and reflecting together.

    Hands-on experience with AI, especially its limitations, fosters realistic expectations and trust, particularly when supported by peer networks and champions. Organisations that translate these insights into governance structures achieve more sustainable adoption. AI readiness evolves through cycles of individual understanding, social learning, and organisational adaptation. These insights suggest that organizations should approach AI adoption not as a one-time implementation but as an ongoing strategic learning process that balances innovation with practical constraints.

    For organisations, this means shifting from one-off training modules to a more dynamic approach: creating opportunities for collaborative experimentation, peer-to-peer learning, and coaching. When employees can practice, question, and learn from each other, AI adoption shifts from a top-down mandate to a shared journey of growth, making technology both accessible and meaningful.

    Building the Core Competencies

    So, what does it take to nurture an AI-confident workforce? The answer lies less in technical skills and more in behavioural competencies that prepare employees to work in dynamic, uncertain environments.

    • Embracing Ambiguity and Change AI is evolving faster than any traditional business process. Employees who can handle ambiguity – who don’t freeze when outcomes are uncertain – are more likely to adapt successfully. When DHL introduced AI-enabled voicebots to handle customer instructions in Germany, employees who were open to change engaged with the technology as an assistant, while those resistant to ambiguity initially viewed it as an intrusion. Over time, the organisation supported the transition by framing AI as a tool to free up capacity rather than replace jobs.
    • Adaptability and Resilience Adaptability is the willingness to pivot, and resilience is the ability to bounce back after disruption. Together, they form the backbone of AI readiness. At Goldman Sachs, more than 10,000 employees began using the firm’s in-house AI assistant to streamline research, coding, and client communication. Rather than resisting, teams adapted quickly, experimenting with how AI could ease daily pressures while still validating outputs with their expertise. This balance of flexibility and discipline illustrates how adaptability and resilience help employees not just absorb new tools, but sustain performance during change.
    • Learning Agility Learning agility is the readiness to learn, unlearn, and relearn continuously. In environments where AI tools change every few months, this is essential. Microsoft’s developer study showed that over 75% of developers now use AI assistants regularly, and nearly 90% report feeling more productive. What drove adoption wasn’t formal training videos but the willingness to experiment, test, and learn in real time. Organisations that encourage small-scale experimentation and peer learning see faster adoption than those that rely on traditional classroom training alone.
    • Digital Confidence and Critical Thinking Confidence in using technology is about trusting oneself to explore, troubleshoot, and evaluate outputs critically. AI is powerful, but not always accurate. Employees with digital confidence and strong critical thinking skills are better at spotting errors, questioning biases, and deciding when human judgement must override machine recommendations. ANZ Bank conducted a six-week experiment with GitHub Copilot involving around 100 engineers, and the results showed a significant productivity increase: tasks were completed 42.36% faster by engineers using Copilot compared to those who did not. Alongside productivity, their ability to critically evaluate AI-generated code ensured quality didn’t suffer.
    • Creativity, Innovation and Growth Mindset Paradoxically, AI doesn’t diminish the importance of creativity – it amplifies it. With AI handling repetitive tasks, employees are freer to experiment and innovate. A growth mindset – the belief that skills can be developed through effort – helps employees view AI not as a threat but as an opportunity to push the boundaries of what’s possible. PwC Australia has shifted its recruitment criteria toward human-centred qualities such as curiosity, collaboration, and ethical judgment over traditional technical checklists. Their reasoning is simple: in a world where AI evolves daily, the best long-term asset is human adaptability, creativity and emotional intelligence.

    Collaborating with AI: Shaping New Working Models

    For AI to feel more approachable, it must weave into daily workflows in simple, meaningful ways – summarizing long reports, drafting emails, or assisting with research.

    Deloitte UK’s in-house AI chatbot, PairD, illustrates this: the share of audit staff interacting with the chatbot monthly rose from 25% to nearly 75% in a year, generating over 1.1 million prompts between April 2024 and February 2025. Employees use it not just for basic questions but to develop complex prompts, assisting with document summaries, coding, and data analysis. The focus is on freeing up time for deeper analytical work, showing that AI’s value lies in hands-on, embedded collaboration.

    Agentic AI takes this further by acting semi-autonomously. Unlike reactive tools, it anticipates, flags errors, proposes next steps, and can carry out actions independently, like rescheduling shifts or managing interview schedules.

    McKinsey points out how agentic AI is reshaping talent workflows. Instead of waiting for recruiters to prompt each step, these systems can scan resumes, shortlist candidates, and even line up interview schedules on their own. What comes back to the recruiter isn’t raw data, but a refined set of options to review. This frees people to spend their energy where it matters most – making judgements, building connections, and applying empathy.

    Effective worker-AI coexistence depends on cultivating “agentic behaviours”: intentionality, proactivity, adaptability and collaboration. Embedding these behaviours ensures AI aligns with human values and business goals, turning technology from a tool into a true collaborator that amplifies productivity, innovation, and human judgment.

    Real-World Rewards of Building AI-Confident Workforces

    When employees embrace AI confidently, worker-AI coexistence becomes more than faster work – it creates smarter, bolder, and more adaptable teams. The real gains appear in innovation, resilience, and a workforce ready for the future.

    • Productivity gains that go beyond efficiency At Microsoft, developers using GitHub Copilot reported completing tasks up to 55% faster, with some workflows showing 90% higher productivity. Beyond speed, employees felt empowered to tackle more creative and complex work, reflecting behaviours like curiosity, learning agility, and confidence in experimenting with AI. This shows how AI-ready behaviours amplify both efficiency and quality, not just output volume.
    • A stronger culture of innovation and adaptability At DHL, AI is embedded into logistics planning and warehouse operations, but the real transformation comes from employees. Staff trained to engage confidently with AI-driven tools are not only executing tasks more effectively – they actively suggest improvements, experiment with new approaches, and share insights on operational efficiencies. This behaviour reflects adaptability, curiosity and proactive problem-solving. As a result, the organisation benefits from a culture where innovation emerges bottom-up, employees feel empowered to influence processes, and adaptability becomes a shared competency, not just a technology-driven outcome.
    • Talent retention through future-proofing careers Employees increasingly look for employers who invest in reskilling and help them stay relevant. Business Insider highlighted that workers are more likely to stay loyal to companies that actively prepare them for an AI-enabled future. By cultivating behaviours like continuous learning, openness to new tools, and self-driven development, organisations signal commitment to people, boosting loyalty and trust.
    • Competitive edge through agility At ANZ Bank, AI was embedded in fraud detection and customer support, but the real advantage came from employees upskilled to understand, trust, and act on AI insights. By demonstrating behaviours like adaptability, critical thinking, and collaboration, teams responded faster to customer needs and mitigated risks effectively, turning technology adoption into a tangible strategic advantage.
    • Risk Mitigation and Ethical Leadership AI-confident employees are trained to spot biases, misuse, and ethical risks. For example, Bank of America invests in programmes that teach staff responsible AI use in financial services. Employee behaviours like accountability, vigilance, and ethical reasoning ensure that AI is applied responsibly, building trust with customers, regulators, and the market.
    • Stronger organisational resilience During the pandemic, companies with AI-ready talent adapted faster. Unilever, for instance, leveraged AI-driven workforce planning to redeploy staff where demand shifted most. Employees trained to work with AI insights – demonstrating adaptability, problem-solving, and proactive decision-making – enabled the company to pivot quickly and maintain operational continuity. AI confidence here is as much about behavioural readiness as technological capability.

    Ethics and Trust: The Compass for AI Collaboration

    Ethics and trust are foundational for AI-readiness and effective Worker-AI coexistence. Organisations must foster behaviours prioritising fairness, transparency and accountability, not just implement technology. The Commonwealth Bank of Australia’s experience illustrates this: plans to cut 45 customer service jobs using AI chatbots were reversed after rising call volumes and union pressure, showing that efficiency cannot override responsibility toward employees and customers. Building these behaviours into everyday workflows is essential for sustainable adoption.

    Key considerations for ethical AI adoption:

    • Embed ethics into behaviour – Implement principles like fairness, privacy, explainability, and security from the start.
    • Build transparency tools – Explain why AI makes suggestions to foster safety and commitment.
    • Educate employees – Cover legal and ethical risks, including prompt handling and data privacy.
    • Proceed gradually – Implement AI thoughtfully rather than rushing replacement.

    IBM demonstrates the impact: by training employees in responsible AI use, bias detection, and explainability, the company fosters trust internally and externally, making AI adoption more sustainable and aligned with organisational values while protecting workforce confidence and brand reputation.

    Conclusion

    AI adoption succeeds when employees embrace it confidently, guided by behavioural competencies like curiosity, collaboration, ethical awareness, and digital confidence. Framing AI as a partner and embedding it into daily workflows fosters trust, experimentation, and proactive problem-solving. Worker-AI coexistence then becomes a driver of innovation, resilience, and sustainable advantage. Organisations that invest in people as much as technology unlock not just efficiency, but a future-ready workforce empowered to lead in an AI-driven world.

  • Behavioural Analytics: The Next Frontier of Workforce Intelligence

    Behavioural Analytics: The Next Frontier of Workforce Intelligence

    Work today isn’t steady or predictable. Roles evolve, skills expire faster, and teams form and reform around shifting priorities. Technology keeps rewriting how we connect, while employees expect more relevance, flexibility, and purpose from their organisations. In such a fluid environment, the real differentiator isn’t just strategy or tools, but whether a company can truly keep pace with how its people work and grow.

    The way organisations measure people has come a long way. It started with counting heads and tracking costs, then moved into analysing skills, engagement, and HR processes. Each step gave leaders sharper insights, but the focus had mostly been on outcomes. Did employees meet targets? Did they complete the training? What do performance reviews say? What is the attrition rate? These are valuable, sure. But they’re lagging indicators that tell us what happened, not why, when, or how. The real shift begins when you start asking not just what the numbers show, but how people got there. Did someone overwork to hit a goal, collaborate effectively, or lean on old habits instead of learning?

    Hitting a target is the visible part of performance, but the drivers sit beneath the surface. The way people prioritise, solve problems, share knowledge and lean on each other is what shapes the end result. Once you see those patterns, you can shape them too. That’s where behavioural analytics enters the picture – uncovering real-time patterns in engagement, adaptability, collaboration, communication, leadership, and motivation. By paying attention to these signals early, leaders can move from reactive to proactive, using these insights as a springboard for action and growth. That’s potential.

    From Manpower to Behaviour

    HR analytics has been steadily growing, but most organisations are still at the early stages. The roadmap started with a focus on numbers and headcount, evolved to emphasise engagement and performance, and is now slowly transitioning to behavioural analytics, the new order of workforce intelligence.

    • Manpower Analytics – includes workforce basics focusing on numbers like headcount, attrition, and cost-to-hire. It’s quantitative and operational, ensuring the right number of people at the right place and cost. According to ISG’s 2023 HR Tech Survey, only 36% of companies use predictive analytics in HR, and 43% say they’ve built a data-driven HR culture. Most remain stuck in descriptive reporting.
    • People Analytics – Matures beyond headcount to analyse talent and HR processes and connect them to business results, such as quality of hire, engagement, learning effectiveness, succession, and diversity. This is where companies begin predicting rather than just reporting. Deloitte found 70% of organisations were already using people analytics by 2022, with adoption expected to exceed 80% by 2025.
    • Behavioural Analytics – Today there is a need to take a deeper look at the human layer of work: how employees act, interact, and make decisions. It’s more qualitative, linking behaviour to competencies, culture, and performance. This data comes from various sources including, but not limited to, collaboration tools, surveys, and assessments. Its role in shaping organisational culture is reflected in one example: a U.S. bank adopted the platform Humanyze and applied organisational network analysis to understand collaboration dynamics. It found that teams that shared more informal interactions, like overlapping lunch breaks, performed significantly better. By restructuring schedules to encourage this, the bank achieved a 27× return on investment, reduced turnover by 28%, and improved call resolution speed by 23%.

    These are small yet significant findings that behavioural analytics can bring to the forefront, bearing a significant impact on key business metrics in a positive manner. The maturity curve is less a steady climb and more a leap. Most organisations are comfortable counting, many are starting to predict, but only a few are bold enough to decode how people truly behave and connect.

    Dimensions of Employee Behavioural Analytics

    As HR moves from transactional to transformational, behavioural analytics steps in to go beyond basic metrics and answer questions such as:

    • How are time and effort being invested?
    • How are people interacting and collaborating?
    • How are employees pursuing development and feedback?
    • How are they contributing to shared intelligence?
    • How do employees feel and sustain performance?
    • How do leaders inspire, align, and govern responsibly?

    These questions anchor six key dimensions of behavioural analytics that bring the human side of organisational performance into focus:

    • Flow of Work: Captures how employees allocate energy, balance demands, adopt new ways of working, and uphold ethical behaviours – Time usage, adaptability, workload rhythms, ethical compliance
    • Web of Connections: Reveals the density, diversity, and responsiveness of professional networks – Communication quality, responsiveness, team cohesion, network health
    • Growth Mindset Signals: Shows proactive behaviours around learning, adapting, and seeking input – Learning behaviours, adaptability, feedback loops, change adoption
    • Knowledge Capital: Focuses on contribution, documentation, and thought leadership – Knowledge sharing, visibility, innovation contribution
    • Wellbeing & Sentiment Pulse: Adds the emotional and psychological layer to behavioural data – Emotional state, engagement, recognition, resilience
    • Leadership & Purpose Dynamics: Captures the clarity of purpose leaders provide, the ethical tone they set, and how effectively they align teams to shared goals and long-term vision – Leadership effectiveness, influence, purpose alignment, trust
    Six Dimensional Behavioural Analytics Maturity Framework by Karmine

    The Organisational and Employee Value of Behavioural Analytics

    Benefits for Organisations

    • Early Warning Signals for Productivity and Engagement: Instead of waiting for quarterly engagement surveys, organisations can detect issues in real time. Microsoft saw a 16% rise in late-night meetings, 50+ messages sent outside hours, and 20% of staff working weekends. These patterns flagged risks of burnout and workload imbalance, prompting leadership to set clearer boundaries and prevent productivity collapse.
    • Strengthened Culture and Resilience During Change: Helps organisations spot morale dips and act quickly to protect culture. During an unsolicited takeover attempt, Unilever used automated listening tools and sentiment analysis to track employee engagement and internal communication. This helped detect early signs of falling morale and launch support programs. By acting swiftly, they maintained productivity and workforce resilience. Transparent communication and a strong culture focus enabled Unilever to withstand the takeover pressures and protect employee trust.
    • Data-Driven Management and Strategies: Instead of relying on assumptions, companies can test which behaviours drive performance and coach managers accordingly. Google’s Project Oxygen proved that effective managers aren’t born; they practise specific, observable behaviours. By analysing more than 10,000 data points, Google identified ten observable and coachable behaviours that reshaped manager training, recognition systems, and even promotion criteria. Within a year, 75% of underperforming managers had improved significantly, leading to stronger team performance, higher engagement, and measurable productivity gains.
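The kind of early-warning signal described above can be sketched in a few lines of code. The snippet below is an illustrative toy, not any vendor's actual method: it assumes a simple event log of (employee, timestamp) message records and flags people whose share of after-hours or weekend activity crosses a threshold. The 08:00–18:00 working window and the 20% cut-off are assumptions chosen for the example.

```python
from datetime import datetime
from collections import defaultdict

# Illustrative working-hours window and alert threshold (both assumptions).
WORK_START, WORK_END = 8, 18
ALERT_THRESHOLD = 0.20  # flag if >20% of messages fall outside working hours

def after_hours_share(events):
    """events: iterable of (employee, datetime) message-sent records.
    Returns {employee: fraction of messages sent after hours or on weekends}."""
    total = defaultdict(int)
    outside = defaultdict(int)
    for emp, ts in events:
        total[emp] += 1
        if ts.hour < WORK_START or ts.hour >= WORK_END or ts.weekday() >= 5:
            outside[emp] += 1
    return {emp: outside[emp] / total[emp] for emp in total}

def burnout_flags(events, threshold=ALERT_THRESHOLD):
    """Employees whose after-hours share exceeds the threshold."""
    return {emp for emp, share in after_hours_share(events).items() if share > threshold}

# Hypothetical event log for two employees.
events = [
    ("asha", datetime(2024, 3, 4, 10, 15)),  # Monday, in hours
    ("asha", datetime(2024, 3, 4, 22, 40)),  # Monday, late night
    ("asha", datetime(2024, 3, 9, 11, 5)),   # Saturday
    ("ben",  datetime(2024, 3, 4, 9, 30)),
    ("ben",  datetime(2024, 3, 5, 14, 0)),
]

print(burnout_flags(events))  # {'asha'} — 2 of her 3 messages are outside hours
```

In practice the aggregation would run over anonymised or aggregated data (per the ethical considerations discussed later), and the threshold would be tuned per team rather than fixed globally.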

    Benefits for Employees

    • Stronger Voice and Sense of Belonging: Empowers employees by ensuring their experiences are heard and acted upon. Mercer launched its “Your Voice Matters” initiative after discovering that staff felt disconnected at work, encouraging open communication and feedback through regular surveys and focus groups. This raised engagement from 50% to 75% in two years. Employees felt genuinely listened to, which boosted motivation, reduced turnover, built trust, and increased overall productivity.
    • Smarter Workload Distribution Through Real Insights: Uncovers patterns of overwork or underutilisation, enabling leaders to spread tasks more evenly across teams. Microsoft’s after-hours analysis helped leaders set clearer boundaries and expectations, ensuring teams stayed productive without burning out.
    • Fairer Development and Growth: When leadership behaviours and performance drivers are grounded in real data, employees benefit from more transparent and fair growth pathways. Google’s Project Oxygen gave employees tangible benefits by translating vague ideals of “good leadership” into clear, coachable actions. Instead of hoping their manager was supportive, employees could expect consistent practices – like regular check-ins, meaningful feedback, and visible support for career growth. This improved trust in leadership and created fairer career paths.

    Simply put, behavioural analytics gives organisations sharper decision-making and gives employees a healthier, more supportive workplace.

    AI-Powered Employee Behavioural Analytics

    AI-powered behavioural analytics is transforming how organisations understand and support their workforce by moving beyond quarterly reviews and annual surveys to real-time insights drawn from collaboration tools, communication channels, and learning systems. Imagine a system that detects a 30% drop in team engagement over two weeks or flags when a top performer’s response time slows by half. AI interprets tone, collaboration patterns, and learning engagement to provide context-rich alerts that allow leaders to act quickly and strategically. The benefits are clear: speed, with instant notifications instead of delayed feedback; context, with cues that highlight root causes rather than raw data; and focus, with precise signals on risks like engagement dips or collaboration breakdowns. As companies adopt these tools, they create more adaptive and personalised workplaces where employees gain tailored career recommendations and learning paths while HR benefits from ethical, explainable analytics that build trust.
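As a rough illustration of the "detects a 30% drop in team engagement over two weeks" idea, the hypothetical sketch below compares the average of the most recent 14 days of a daily engagement score against the preceding 14 days. The score series, the window length, and the 30% threshold are all illustrative assumptions, not the behaviour of any specific product.

```python
def engagement_drop(scores, window=14, drop_threshold=0.30):
    """scores: chronological list of daily engagement scores (floats).
    Compares the last `window` days with the `window` days before them.
    Returns (dropped?, fractional change)."""
    if len(scores) < 2 * window:
        return False, 0.0  # not enough history to compare two windows
    prev = scores[-2 * window:-window]
    recent = scores[-window:]
    prev_avg = sum(prev) / window
    recent_avg = sum(recent) / window
    change = (recent_avg - prev_avg) / prev_avg
    return change <= -drop_threshold, change

# Two weeks averaging 80, then two weeks averaging 52: a 35% drop.
daily = [80.0] * 14 + [52.0] * 14
flagged, change = engagement_drop(daily)
print(flagged, round(change, 2))  # True -0.35
```

A real system would attach context to such an alert (meeting load, sentiment cues, team changes) rather than surfacing the raw number alone, which is the "context-rich alerts" point made above.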

    Microsoft 365 Copilot is embedded in Teams and Outlook to summarise meetings, detect communication overload, and suggest more efficient collaboration patterns. Similarly, Workday’s AI capabilities analyse sentiment and skills data to provide managers with ethical, explainable insights for talent planning.

    Why Behavioural Analytics in HR Is Still Underleveraged

    Behavioural analytics has long been used to understand consumer behaviour. Retail giants, streaming services, and digital platforms have refined how they capture customer clicks, preferences, choices, and loyalty. All of this fuels personalisation, retention, and revenue growth. But when it comes to human capital, that kind of behavioural insight remains under-leveraged, with the following key challenges holding back adoption:

    • Privacy, Ethics, and Trust: Employees expect far higher privacy and dignity at work than consumers do in markets. Tracking collaboration, keystrokes, or sentiment can easily cross ethical lines without clear consent or transparency. Unlike consumers who trade data for discounts or personalisation, employees value autonomy, fairness, and legal protection.
    • Fragmented and Inconsistent Data: Employee data is scattered across emails, chat logs, meetings, surveys, and HR systems. Only 40% of HR professionals say their organisation is ‘good or very good’ at analysing people data, and just 48% rate their data generation capabilities highly. This fragmentation makes insights unreliable and scaling difficult.
    • Capability and Readiness Gaps: Even when the will is there, most companies lack the systems and skills needed for advanced behavioural analytics compared to digital customer-facing functions. Companies need mature analytics capabilities, reliable data, and sophisticated technology infrastructure. Many are still building maturity in workforce and people analytics before they can dive deeper.
    • Unclear ROI Compared with Consumer Use Cases: Marketing analytics delivers clear returns in sales and conversion, but HR outcomes – engagement, collaboration, or well-being – are harder to link directly to financial impact. This makes budget holders hesitant to invest, even though the long-term value is significant.

    Until such issues are addressed, behavioural analytics will remain underused in HR, despite its clear potential to strengthen both employee growth and organisational performance.

    Building the Foundation for Behavioural Analytics

    Behavioural analytics sits at the advanced end of the HR analytics maturity curve. Most organisations begin with descriptive reports, move into diagnostic dashboards, and then step into predictive & prescriptive models. Behavioural analytics relies on multiple layers of technology, data and culture being in place.

    Laying the Foundation for Behavioural Analytics

    Ethical Considerations: Watchful but Respectful

    Here’s where a bit of nuance matters. Behavioural analytics only works if employees trust it. Done openly, it strengthens collaboration, development, and opportunity. Done poorly, it risks undermining culture. The goal should always be support, not surveillance. Here are ethical considerations that companies should apply:

    • Transparency: Clearly explain what data is collected and why. Position it as development-focused, not surveillance.
    • Privacy: Use aggregate or anonymised data where possible. If individual behaviour is analysed, do so with consent and for growth, not punishment.
    • Opt-In Choices: Make participation voluntary where you can, with clear benefits such as tailored support.
    • Empathy-Driven Use: Interpret behavioural data with context – late responses may reflect deep work or personal matters, not disengagement. Data should start a conversation, not drive judgement.
    • Clear Boundaries: Define what will not be measured (e.g., private chats, personal devices) to build trust.
    • Shared Value: Show how insights help employees grow in their careers and learning, not just how they benefit the organisation.
    • Human Oversight: Algorithms can flag patterns, but people should interpret and act with care.
    • Feedback Loops: Give employees a voice to question or clarify how their data is read, making it a two-way process.
    • Cultural Sensitivity: Behaviours vary by culture and role; avoid one-size-fits-all interpretations.
    • Positive Reinforcement: Use analytics to encourage constructive behaviours, not just detect risks.

    Linking Behavioural Analytics to Learning & Development

    Behavioural analytics provides a data-driven foundation for modern L&D. By measuring signals such as collaboration patterns, feedback-seeking, or adaptability to change, organisations can identify the precise learning needs that hold teams back. Instead of rolling out generic programs, analytics enables sharper, personalised learning journeys across technical skills, soft skills, leadership development, and competency training.

    This enables employees to engage with learning that feels relevant to their roles, while leaders can track measurable progress through the same behavioural indicators that highlighted the need. This creates a closed loop between insight and action – analytics identifies gaps, L&D addresses them, and follow-up analytics measures the impact. Done well, this approach not only builds stronger skills but also nurtures a culture of continuous learning, adaptability, and high performance.

    Conclusion

    Behavioural analytics is fast becoming a core part of how organisations understand and support their people, using real behavioural signals to shape smarter learning, more relevant development, and stronger team performance. The real win is that it helps HR step out of the back office and drive resilience, adaptability, and culture at scale. And with AI in the mix, the future goes beyond analysing behaviour to simulating outcomes, personalising growth, and creating workplaces that continuously learn and improve. It is not just a tool; it is the next frontier in data-driven talent intelligence, delivering strategic, corporate-focused insights.

    References

    1. Deloitte. (2023). Global Human Capital Trends 2023 Report. Deloitte Insights.
    2. Deloitte. (2025). Global Human Capital Trends 2025 Report. Deloitte Insights.
    3. ISG. (2023). Survey on Industry Trends in HR Technology and Service Delivery 2023. ISG Research.
    4. Bersin, J. (2018). People Analytics Maturity Model. Bersin by Deloitte
    5. Humanyze. (2023). Moving toward a people analytics world. No Jitter. https://www.nojitter.com/data-management/moving-toward-a-people-analytics-world
    6. Microsoft. (2023, March 16). Introducing Microsoft 365 Copilot: Your copilot for work. Microsoft Blog. https://blogs.microsoft.com/blog/2023/03/16/introducing-microsoft-365-copilot-your-copilot-for-work/
    7. Workday. (2023, September 27). Workday unveils new generative AI capabilities to amplify human performance at work. Workday Investor Relations. https://investor.workday.com/2023-09-27-Workday-Unveils-New-Generative-AI-Capabilities-to-Amplify-Human-Performance-at-Work
    8. Unilever. (2017). Annual Report and Accounts 2017. https://www.unilever.com/files/origin/6be0d0dbe8c5088374b7f3ff903ef4995a1a6a62.pdf
    9. George, W. W., & Migdal, A. (2017). Battle for the Soul of Capitalism: Unilever and the $143 Billion Takeover Bid. Harvard Business School Case 317-127.
    10. Google Re:Work. (n.d.). Managers – Identify what makes a great manager. Google Re:Work. https://rework.withgoogle.com/intl/en/guides/managers-identify-what-makes-a-great-manager
    11. Garvin, D. A. (2013, December). How Google sold its engineers on management. Harvard Business Review.
    12. Schneider, M. (2018, December 13). Analysis of 10,000 reports told Google to train new managers in 6 areas. Inc. https://www.inc.com/michael-schneider/analysis-10000-reports-told-google-to-train-new-managers-6-areas
    13. Mercer. (2022–2025). Your Voice Matters: Employee listening and engagement. Mercer Employee Experience Solutions. https://www.mercer.com/en-in/solutions/talent-and-rewards/employee-experience/employee-listening/
    14. HR.com. (2024). State of People Analytics 2023–2024 Research Report. HR.com.
    15. Insight222. (2024). People Analytics Trends Report 2024. Insight222
    16. MyHRFuture. (2023, May 10). Harnessing data for growth: The impact of people analytics. myHRfuture
    17. Davenport, T. H., Harris, J., & Shapiro, J. (2018, November). Better people analytics. Harvard Business Review
    18. Scribd. (2019). 9 HR Analytics Case Studies. Scribd. https://www.scribd.com/document/432107816/9-HR-Analytics-Case-Studies-1569541778
    19. Emerald. (2024). The power of peer recognition points: Does it work? Strategic HR Review, 24(1), 2–6. https://www.emerald.com/shr/article/24/1/2/1245460/The-power-of-peer-recognition-points-does-it
    20. SHRM. (2024). State of the Workplace Study 2023–2024. SHRM Research
    21. Amplitude. (2025, July 6). What Is Behavioral Analytics? Definition, Examples, & Tools. Amplitude Blog. https://amplitude.com/blog/behavioral-analytics-definition

    Unleashing Internal Employee HEROs: The ROI of Positive Psychological Capital

    In a world of constant uncertainty, where skills become obsolete at unprecedented rates, employees are burning out, disengaging, or disconnecting. Traditional resources like compensation, perks, or well-being programs are not enough. So how do organisations build a workforce that’s adaptive, engaged, and future-ready?

    A workforce that doesn’t just cope but thrives? The answer isn’t more skills or smarter systems, but stronger inner foundations. The key is to build Psychological Capital (PsyCap) by empowering Internal Employee HEROs. For organisations, PsyCap is a behavioural asset that enhances how people think, feel, and act at work.

    Psychological Capital ‘HERO’ Model

    The HERO Model, conceptualised by Luthans, Youssef, and Avolio in 2007, serves as an extension of positive organisational behaviour and comprises four key elements – Hope, Efficacy, Resilience, and Optimism:

    Key Elements of the HERO Model

    The HERO elements independently contribute to workplace effectiveness. Together, they multiply into a powerful psychological engine that fuels proactive behaviour and adaptive performance.

    Organisational Payoff of Building Psychological Capital

    Organisations that invest in building PsyCap look for more than just morale boosts; they aim to influence productivity, engagement, and performance, yielding strategic returns.

    • Direct Impact on Performance and Productivity: Higher PsyCap is significantly correlated with job performance and job satisfaction. This translates into employees who not only deliver more consistent results, but also take greater ownership of their work, adapt faster to change, and sustain high output even in challenging conditions. Organisations that invest in building PsyCap have shown productivity increases of up to 20%.

    Google’s Employee Mindfulness Program includes tools such as guided meditation, apps and workshops, all integrated into its culture to boost employee well-being, focus, and resilience. The offering helps employees manage stress and improve emotional regulation. This positively impacts organisational performance by fostering a more engaged, adaptive, and productive workforce, reducing burnout, and supporting sustained innovation.

    • Enhanced Employee Engagement and Retention: Employees high in PsyCap tend to be more emotionally invested in their organisations, less likely to burn out, and more likely to stay and thrive, highlighting how inner psychological resources can stabilise employee retention under pressure. Organisations that invest in positive organisational behaviour, including PsyCap, have shown retention improvements of 25%.

    Salesforce’s “Ohana Culture” includes mental health and wellness programs, resilience-building workshops, and opportunities for employees to contribute to social impact work, fostering a sense of purpose and hope among its workforce. Salesforce’s focus on family, trust, and community creates a supportive work environment and high levels of job satisfaction among employees. This positive environment and strong sense of community have contributed to low turnover rates, helping Salesforce retain top talent.

    • Business Outcomes that Compound Over Time: Companies that cultivate PsyCap report improved customer satisfaction, innovation rates, and operational efficiency. Research has shown that organisations implementing targeted PsyCap interventions saw performance improvements of 2–3% which, when applied to large workforces, represented millions of dollars in productivity gains.

    Microsoft’s leadership encourages a growth mindset that embraces learning from failure and continuous development. The approach enhances individual capabilities and drives team collaboration, creativity, fuelling innovation. Over time, these cumulative improvements lead to stronger business outcomes – higher productivity, sustained competitive advantage, and accelerated innovation – that compound, positioning Microsoft for long-term success in a fast-evolving technology landscape.

    • Resilience as a Strategic Risk Buffer: Resilient employees form the backbone of crisis-readiness. High PsyCap teams recover faster from setbacks, collaborate effectively under pressure, and are more likely to find creative solutions instead of defaulting to risk-aversion techniques. This behavioural agility reduces downtime and accelerates recovery from disruptions.

    IBM emphasizes building employee resilience and self-efficacy through wellness programs and leadership training focused on emotional intelligence and adaptability, equipping employees with tools to take ownership of their career growth and maintain optimism in the face of challenges. Leadership programs that enhance self-awareness further reinforce personal resilience, enabling leaders and teams to navigate uncertainty more effectively. This focus on resilience acts as a strategic risk buffer for IBM, reducing the impact of workplace stressors and disruptions while sustaining productivity and long-term organizational stability.

    • Culture and Reputation Dividend: Intentional modelling of PsyCap reduces change resistance, shortens transformation timelines, and shapes organisational culture. An optimistic, hopeful, and confident workforce not only drives results internally but also signals to customers, investors, and prospective hires that the company is forward-thinking and people-centric.

    Ben & Jerry’s commitment to building empathy and compassion in its workforce, through values-based hiring and culture-building efforts aligned with its social and environmental mission, enhances internal collaboration and morale. It also boosts the company’s reputation as a purpose-driven brand, creating a significant culture and reputation dividend that attracts customers, talent, and partners who share these values.

    Building the Foundations of a High-PsyCap Culture

    • Go Beyond Wellbeing by Embedding PsyCap into Organisational DNA: Most employee wellness programs today are reactive and step in only after burnout, attrition, or disengagement have already happened. PsyCap offers a proactive mindset shift that helps build the mental and emotional infrastructure needed for sustainable engagement, empowering individuals to become self-renewing assets who regulate stress, adapt quickly and maintain a solution-oriented mindset.

    How can this be applied:

    • Performance reviews can include focus on how employees demonstrated persistence in setbacks, optimism in uncertain conditions, or creative problem-solving under pressure.
    • Leaders must strive to consistently model these traits in their own conduct, publicly sharing how they navigate challenges to make them aspirational and normalised across the workforce.
    • Deconstructing into Observable Daily Habits: The key to making PsyCap truly impactful lies in consistency – small, observable micro-behaviours practiced daily – how conflict is resolved, how failure is treated, how listening happens. These micro-habits are easy to apply, stack onto existing routines, and create repeatable patterns that build long-term behavioural change.

    How can this be applied:

    • You don’t train efficacy – you train the micro-behaviours of efficacy.
    • Starting meetings with a clear plan, summarising and sharing learnings after completing a task, actively seeking peer input on work in progress, and volunteering for small stretch assignments that push skills beyond current comfort zones.
    • Equip Leaders and Managers as PsyCap Multipliers: Leaders and managers are the primary translators of organisational intent into daily employee experience. By equipping them with targeted training on coaching conversations, cognitive reframing techniques, and resilience storytelling, companies turn them into catalysts for PsyCap development. Managers must model behaviours to signal their importance, and feedback should focus on behaviours, not just outcomes.

    How can this be applied:

    • Managers can be taught how to help team members visualise success, break daunting challenges into manageable steps, and identify resources that increase their likelihood of success.
    • Regular manager roundtables or peer coaching circles can help them share what works, troubleshoot roadblocks, and stay aligned in reinforcing PsyCap behaviours.
    • Performance & Learning Systems that Reward PsyCap Behaviours: Integrating recognition for PsyCap behaviours into performance and learning systems means moving beyond measuring only outcomes to valuing the underlying mindsets and actions that drive sustainable success. By incorporating these competencies into performance and learning systems, organisations emphasise the importance of ‘how’ results are achieved and create a continuous loop of reinforcement and skill-building.

    How can this be applied:

    • By embedding markers for persistence, learning agility, solution-oriented thinking, and collaborative problem-solving into performance reviews, peer-feedback platforms, and real-time recognition tools.
    • Learning programs can be designed to develop these traits through workshops, simulations, and on-the-job projects, while performance and recognition systems validate and reward their application.
    • Managers can consistently highlight these traits in feedback discussions and link them to career progression, bonuses, or development opportunities.
    • Feedback Loops & Storytelling: Feedback loops and storytelling can be powerful levers for building PsyCap when they move beyond standard performance reviews to become an ongoing exchange of insights, recognition, and shared experiences. These stories, when told authentically and linked to the organisation’s values, make abstract competencies tangible and aspirational, showing peers how PsyCap works in practice.

    How can this be applied:

    • Organisations can intentionally capture real employee stories, instances where HERO helped navigate challenges, and share them through team huddles, internal newsletters, learning sessions, or digital platforms.
    • Provide timely, constructive feedback that reinforces desired behaviours and celebrates small wins.

    From HERO to Habit: Behavioural Competencies as the True Capital

    The true strength of PsyCap is exhibited through individual behavioural competencies. Self-awareness, emotional regulation, resilience, active listening, conflict handling, and assertiveness are core capabilities that shape how people work, lead, and grow. Teams with higher PsyCap are more collaborative, creative, and resilient to change, leading to faster decision cycles and better problem-solving under pressure. Embedding specific behavioural competencies into job roles, leadership, and feedback systems can amplify PsyCap organically, creating a scalable, culture-wide impact.

    Source: WEF Future of Jobs Report 2025

    Behavioural competencies are now as vital as technical skills. The WEF Future of Jobs Report 2025 highlights critical human skills like analytical thinking, creative thinking, resilience, flexibility, agility, curiosity, lifelong learning, leadership and social influence as among the fastest-growing capabilities needed through 2030. These competencies are strategic differentiators that, when rooted early in career, can turn potential into progress compounding into a long-term competitive advantage for both employees and organisations.

    Key Behavioural Competencies for an Evolving Workforce

    While the modern workplace demands a wide range of human capabilities, below are foundational competencies that drive meaningful performance and growth.

    • Self-Awareness: The ability to consciously recognise, understand, and reflect on your own thoughts, emotions, motives, values, and behaviours, and how they affect both yourself and others. Self-aware employees make better decisions, communicate more effectively, and are more promotable and coachable. According to research by Tasha Eurich (2018), only 10–15% of people are truly self-aware, despite 95% thinking they are. When cultivated early, self-awareness becomes the bedrock of personal and professional development.
    • Emotional Regulation: It focuses on the constructive management of emotions in real time. It is what allows a person to stay composed in conflict, navigate stress productively, and avoid reactive behaviour. This competency is crucial in high-pressure situations like appraisals, leadership roles, or navigating ambiguity. For leaders, it supports presence, patience, and clarity in crisis.
    • Resilience: Resilience is about bouncing back from setbacks and bouncing forward with learning. It involves flexible problem-solving, reframing adversity, and regulating negative self-talk. Unlike ‘grit’, which can romanticise endurance, resilience includes flexibility, emotional agility, and social support. According to a study in the Journal of Occupational and Environmental Medicine, individuals with higher resilience experienced 10–20% lower rates of absence, depression, and productivity loss, even in high-stress environments, compared to those with lower resilience. Teams with high collective resilience scores respond faster to disruption and require less emotional labour from managers during uncertainty.
    • Constructive Communication: Constructive communication is a combination of what we say, how we say it, and the way we interpret others’ words. It holds teams together, bridges the gap between leadership and employees, and is the catalyst for productivity and innovation. It fosters clarity, boosts morale, and minimises misunderstandings – creating a sense of belonging and engagement, and driving sustained success. It focuses on how to ask questions, offer and receive feedback, resolve tensions without avoidance, speak with clarity and respect, and listen actively with empathy.

    Behavioural Competencies as Catalysts for Career Milestones

    At every stage of a career, it’s the right mix of mindset and skill that drives progress. These competencies show how we turn individual strengths into collective success.

    Impact on employees across layers in the organisation

    Conclusion

    In the rush to automate, upskill, and optimise, companies often overlook their most renewable resource: human potential. PsyCap reminds us that ‘being more’ is often more powerful than ‘doing more’. The real differentiator isn’t just skill; it is behavioural fluency – the ability to regulate, adapt, empathise, and communicate across situations and stages – with which employees become self-renewing contributors to organisational growth.

    When organisations invest in HEROs from the beginning of employee’s career journey, the return isn’t just financial; it is cultural, human, and long lasting. With structured development around awareness, communication, and resilience, employers create a feedback loop of confidence, competence, and clarity. Think of it as compound interest, just as starting early to save yields exponential returns, starting to build behavioural agility early creates career-long ROI.

    It is time to treat behavioural competencies as the foundation of every successful, sustainable organisation, not as afterthoughts. With 59% of global workers needing reskilling by 2030 and employers planning significant investment in workforce transformation, there is an opportunity to embed behavioural development from entry level through the executive suite, producing not just high performers but a Human Advantage: adaptable, engaged, collaborative, and future-ready.

    Sources

    1. Luthans, F., Youssef, C. M., & Avolio, B. J. (2007). Psychological Capital: Developing the Human Competitive Edge. Oxford University Press.
    2. Eurich, T. (2018). What Self-Awareness Really Is (and How to Cultivate It). Harvard Business Review.
    3. World Economic Forum. (2025). The Future of Jobs Report 2025. https://www.weforum.org/publications/the-future-of-jobs-report-2025
    4. Walumbwa, F. O., Luthans, F., Avey, J. B., & Oke, A. (2009). Authentically leading groups: The mediating role of collective psychological capital. Journal of Organizational Behavior, 30(3), 377–396.
    5. Avey, J. B., Reichard, R. J., Luthans, F., & Mhatre, K. H. (2011). Meta-analysis of the impact of positive psychological capital on employee attitudes, behaviors, and performance. Human Resource Development Quarterly, 22(2), 127–152
    6. Luthans, F., Avey, J. B., & Patera, J. L. (2008). Experimental analysis of a web-based training intervention to develop positive psychological capital. Academy of Management Learning & Education, 7(2), 209–221
    7. Shatté, A., Perlman, A., Smith, B., & Lynch, W. D. (2017, February). The positive effect of resilience on stress and business outcomes in difficult work environments. Journal of Occupational and Environmental Medicine.
    8. Journal of Occupational and Environmental Medicine. (2010). The relationship between resilience and workplace outcomes in a large sample of employees. Journal of Occupational and Environmental Medicine, 52(7), 698–706.
    9. Gallup State of the Global Workplace 2025 https://www.gallup.com/workplace/349484/state-of-the-global-workplace.aspx
    10. Employee Benefit News article covering Ben & Jerry’s employee programs: https://www.benefitnews.com/news/ben-jerrys-serves-up-online-curriculum-to-employees
    11. Official Ben & Jerry’s website detailing their values and hiring philosophy: Ben & Jerry’s Values
    12. Harvard Business School case study Ben & Jerry’s Homemade Ice Cream, Inc.: Keeping the Mission(s) Alive https://www.hbs.edu/faculty/Pages/item.aspx?num=12290
    13. Salesforce Ohana Culture blog: https://www.salesforce.com/blog/salesforce-and-hawaii/
    14. Azami, S. (2024). Fostering employee engagement and retention through Ohana culture: A case study of Salesforce. Kronika Journal (ISSN 0023-4923), 24(7).
    15. How Google Uses Mindfulness For Success by Upstack: https://upstackhq.com/blog/engineering-management/how-google-uses-mindfulness-for-success
    16. Google re:Work https://rework.withgoogle.com/intl/en/guides/understanding-team-effectiveness#foster-effective-team-behaviors
    17. Podcast by Capacity Interactive: Inside Google’s Employee Mindfulness Program https://capacityinteractive.com/podcast/inside-googles-employee-mindfulness-program/
    18. IBM Building resiliency: Keeping skills at the core https://mediacenter.ibm.com/media/Building+resiliency%3A+Keeping+skills+at+the+core/1_tn7zz3hp/22694252
    19. IBM analysis https://www.ibm.com/think/insights/how-to-improve-employee-experience-and-your-bottom-line
    20. Article by i4CP on Microsoft’s Growth Mindset: https://www.i4cp.com/productivity-blog/growth-mindset-kathleen-hogan-on-how-microsofts-culture-continues-to-drive-innovation-and-high-performance
    21. Microsoft on Growth Mindset: https://www.microsoft.com/en-us/microsoft-365/business-insights-ideas/resources/grow-your-business-with-a-growth-mindset
    22. How Adopting a Growth Mindset Transformed Microsoft https://neuroleadership.com/podcast/growth-mindset-microsoft
    The Age of Cybercrime: Lessons from a Data Heist and a Tech Support Scam

    Introduction

    In summer 2025, two seemingly unrelated cyber incidents made headlines. In the United States, insurance giant Allianz Life revealed a personal data breach affecting its 1.4 million American customers. Days later, Indian police raided a fake “Microsoft Support” call center in Noida, arresting 18 people for an international tech support scam that had duped unwitting victims (primarily in the U.S.) out of thousands of dollars.

    The two incidents could hardly be more different: one was a high-tech data heist targeting a major corporation, the other a low-tech con targeting everyday computer users. Yet both underscore a new age of cybercrime that is blurring the line between corporate security threats and consumer fraud. The common thread: cybercriminals are exploiting trust at every level.

    In this article, we unpack both cases and analyze what they reveal about today’s cyber-threat landscape. We’ll also explore what cybersecurity means for mid-sized companies and how leaders can strengthen defenses, protect customers, and safeguard their reputations against these modern threats.

    The Allianz Data Breach – A Corporate Wake-Up Call

    On July 16, 2025, Allianz Life Insurance Company fell victim to a cyber breach via social engineering. The attackers tricked their way into a third-party cloud-based Customer Relationship Management (CRM) system, proving once again that the human element is often the weakest link in security.

    Once inside the CRM, the intruders stole personally identifiable information (PII) on the majority of Allianz Life’s 1.4 million U.S. customers, along with data on financial professionals and employees. The company discovered the incident one day after it occurred, notified authorities by July 25, 2025, and began informing affected consumers by August 1.

    All signs point to a known hacking group leveraging voice-phishing (vishing) tactics. In fact, just a month prior, Google had warned about a threat group (tracked as UNC6040 and tied to the loose criminal collective known as “The Com”) that specializes in vishing campaigns aimed at compromising organizations’ CRM instances for large-scale data theft and extortion. One infamous subset of this collective, Scattered Spider, had even breached Australia’s Qantas Airways via a third-party platform using similar social-engineering tricks.

    Investigators suspect this same group may be behind the Allianz breach. If so, beyond the immediate breach, the company could be drawn into a ransom negotiation under the threat of public data exposure.

    This incident is a lesson that cybersecurity isn’t just about firewalls and encryption; it is equally about people and third-party risk. The breach also illustrates how well-organized and research-driven cybercriminal groups have become, going after high-value cloud platforms that aggregate massive troves of data. The fallout for Allianz will likely include costly notifications, possible regulatory fines, and damage to customer trust, a cautionary tale for any business handling sensitive data.

    The Fake Tech Support Scam – Trust Exploited at Scale

    In Noida, India, a group of fraudsters posing as “Microsoft technical support” ran a tech support scam targeting mostly U.S. victims. The scammers acquired contact information through associates in America. For six months, they sent phishing emails warning recipients of a supposed bug or virus in their systems and urging them to contact the provided “tech support” number immediately.

    The victims were redirected (via VoIP) to the fake call center, where the fraudsters, posing as Microsoft experts, walked them through installing a remote-access tool on their PCs under the pretense of diagnosing the issue. With remote access, the scammers deployed malware and fake warning prompts.

    Victims were then coerced into purchasing “security software” or support packages, costing between $250 and $5,000, to “fix” nonexistent problems. Payment was accepted via Zelle money transfer or cryptocurrency, making it harder to trace. Once the money was transferred, some victims were also left with actual malware planted for future exploitation.

    This was no one-off: the FBI ranked tech support scams as the third-costliest U.S. cybercrime of 2024, with losses totaling $1.46 billion. It is striking how organized and large-scale these operations have become. For businesses, it is a stark reminder that fraudsters may exploit your brand to harm your customers, or breach your systems through unwitting employees.

    Modern Cybercrime Landscape: Key Traits of the New Age

    These two case studies raise the question: what are the defining traits of this new era of cybercrime that businesses need to grasp?

    Social Engineering at Scale

    Both attacks succeeded by tricking humans, not systems. Whether through phishing, vishing, or phone scams, social engineering is at the core. Mid-sized businesses are often deluged by such attacks; their employees are reportedly 350% more likely to be targeted than those at larger enterprises.

    Cybercrime-as-a-Service

    Today’s cybercriminals operate like enterprise organizations. Groups like Scattered Spider/The Com run specialized operations with defined roles, while scams like Noida’s call center are run as businesses, complete with managers, employees, scripts, and a supply chain for victim leads. A booming “crime-as-a-service” ecosystem allows cybercrime to scale dramatically.

    Extortion and Multi-Faceted Attacks

    Cybercriminals are combining tactics such as malware, fraud, data theft, and extortion to maximize their payoff. Many ransomware attacks today also steal data before encrypting systems, creating a double-jeopardy scenario (pay to unlock your files, and pay to prevent a leak). Even pure data breaches, as in Allianz’s case, often segue into ransom demands.

    On the flip side, fraud operations like the tech support scam show how attackers focus on financially extorting individuals, yet could just as easily deploy malware during those interactions to enable further crimes. Businesses must be prepared for multi-layered fallout: data privacy issues, financial losses, and reputational damage.

    Global and Cross-Border in Nature

    Cybercrime is now borderless. The Noida call center targeted Americans from India; the data breach of a German-based insurer’s U.S. subsidiary may involve global actors. Law enforcement’s jurisdictional limits often play to the attackers’ advantage, though global cooperation is improving. Business leaders should recognize the scale of such operations and adjust their threat models for actors beyond traditional profiles.

    Third-Party and Supply Chain Vulnerabilities

    Breaches often begin in a compromised third-party environment that has weaker security or exposed credentials. Mid-sized firms, which often rely on third-party cloud services or managed IT providers, need to scrutinize those partners’ security postures and have contingency plans in case a vendor is compromised.

    These trends mean that assuming you’re too insignificant to be targeted is a dangerous myth. The next section looks at why that mindset must change and how organizations can respond.

    Implications: Why No One Gets a Free Pass

    In summary, mid-sized businesses are prime targets for cybercriminals: valuable, yet often vulnerable. Leadership must treat cybersecurity as a core business risk, not just an IT issue. Assuming “it won’t happen to us” is a costly mistake. The good news is that with the right approach and prudent investments, even resource-constrained organizations can significantly reduce their risk.

    Building a Cybersecurity Shield: Frameworks and Strategies for Mid-Sized Firms

    Businesses can take concrete steps to build a robust cybersecurity posture, drawing on established frameworks and best practices. Here are key strategies and considerations:

    Adopt a Security Framework for Structure

    Leverage well-known frameworks such as the NIST Cybersecurity Framework, with its five core functions: Identify, Protect, Detect, Respond, and Recover. That means identifying key assets and risks, safeguarding them, detecting threats early, responding effectively, and recovering quickly. Frameworks like the CIS Critical Security Controls or ISO 27001 can also be adapted to a smaller enterprise. Depending on the nature of the business and its threat exposure, a robust cybersecurity policy becomes the baseline.
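
    To make the framework concrete, here is a minimal sketch in Python of how a small team might track its coverage of the five NIST CSF functions. The activities listed are illustrative suggestions of our own, not an official NIST checklist, and the function names are the only part drawn from the framework itself.

    ```python
    # Hedged sketch: mapping the NIST CSF's five core functions to example
    # starter activities for a mid-sized firm. Activities are illustrative,
    # not an official NIST control list.
    NIST_CSF = {
        "Identify": ["inventory assets and data", "rank business risks"],
        "Protect":  ["enforce MFA", "patch internet-facing systems"],
        "Detect":   ["centralize logs", "alert on anomalous logins"],
        "Respond":  ["maintain an incident response plan", "run drills"],
        "Recover":  ["keep offline backups", "test restore procedures"],
    }

    def coverage_gaps(done: set[str]) -> list[str]:
        """Return the CSF functions with no activity in place yet."""
        return [fn for fn, acts in NIST_CSF.items()
                if not any(a in done for a in acts)]

    # A firm that has only rolled out MFA and log collection still has
    # gaps in Identify, Respond, and Recover.
    print(coverage_gaps({"enforce MFA", "centralize logs"}))
    ```

    Even a simple inventory like this makes it obvious which functions are being neglected, which is the point of adopting a framework in the first place.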

    Foster a Human Firewall (Security Awareness)

    Technology alone won’t stop social engineering. Train employees regularly about phishing, suspicious calls, and scams, and promote a culture where they can report potential threats without fear and think twice before clicking or sharing sensitive information. Many breaches can be stopped by alert staff; for instance, an employee who questions a strange request and alerts IT could foil a business email compromise (BEC) scam. People who become a “human firewall” are the first, and often best, line of defense.
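
    The red flags taught in awareness training can be illustrated as a toy heuristic. This sketch is purely for training discussion, not a real email filter; the phrase list and the allowed sender domains (such as the hypothetical `@yourcompany.com`) are made-up assumptions.

    ```python
    # Toy phishing red-flag checker for awareness training.
    # NOT a production filter; phrases and domains are hypothetical examples.
    SUSPICIOUS_PHRASES = [
        "urgent action required",
        "verify your account",
        "your system is infected",
        "call support immediately",
    ]
    TRUSTED_DOMAINS = ("@microsoft.com", "@yourcompany.com")  # hypothetical

    def phishing_red_flags(sender: str, subject: str, body: str) -> list[str]:
        """Return a list of red flags found in a message (toy heuristics)."""
        flags = []
        text = (subject + " " + body).lower()
        # Pressure language is a classic social-engineering tell.
        for phrase in SUSPICIOUS_PHRASES:
            if phrase in text:
                flags.append(f"pressure language: '{phrase}'")
        # Lookalike or unknown sender domains are another common sign.
        if not sender.lower().endswith(TRUSTED_DOMAINS):
            flags.append(f"unrecognized sender domain: {sender}")
        return flags
    ```

    A message like the Noida scammers’ (“Your system is infected, call support immediately”) from a lookalike domain trips several flags at once, which is exactly the pattern employees should be trained to notice.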

    Secure Your Technology and Third Parties

    Go beyond the basics and focus on:

    • Vulnerability management – Keep your systems, especially internet-facing ones, patched and updated. Many attacks exploit unpatched software or weak remote access settings.
    • Third-party risk management – Assess the security of the software and vendors you use. If you entrust customer data to a cloud CRM or rely on an outsourced IT provider, scrutinize their security practices, data encryption, and breach history. Prepare contingency plans for vendor breaches, covering log audits, access management, and data management, and include supply chain risk as part of your security strategy.
    • Multi-factor authentication and Zero Trust principles – Enable multi-factor authentication (MFA) across critical accounts and systems such as email, VPNs, banking portals, and admin logins. Adopt a Zero Trust security model, which means never automatically trusting any connection or user, even inside your network. Verify explicitly, enforce identity checks, limit access, monitor behavior, and segment systems to minimize damage if compromised. For example, don’t give any single user broad access to all data; segment your network and data so that if one account is compromised, the attacker can’t roam freely.
    • Incident response and backup – Assume that an incident will happen. Prepare an incident response plan: create an internal response team with clear roles, keep an emergency contact list (law enforcement, cyber insurance, IT forensics, etc.), and run practice drills. Maintain reliable offline and offsite data backups, and test them. Ensure you have business continuity plans in case your primary systems go down, perhaps by temporarily reverting to manual processes or secondary systems. Also, know your legal and compliance obligations: if customer data is stolen, you may need to notify within a certain timeframe.
    • Leverage external expertise and tools – Mid-sized organizations may lack deep in-house security teams, but they can tap outside resources to boost their defenses.
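
    The Zero Trust idea above (verify explicitly, deny by default, grant least privilege) can be sketched in a few lines. This is a deliberately minimal illustration with hypothetical role names, not a real access-control implementation.

    ```python
    # Minimal sketch of a Zero Trust-style access decision: every request is
    # verified explicitly (MFA) and checked against least-privilege role
    # grants, regardless of network location. Roles and resources are
    # hypothetical illustrations.
    ROLE_GRANTS = {
        "support": {"tickets"},
        "finance": {"tickets", "billing"},
        "admin":   {"tickets", "billing", "user-admin"},
    }

    def authorize(user_role: str, mfa_verified: bool, resource: str) -> bool:
        """Deny by default; allow only MFA-verified users with an explicit grant."""
        if not mfa_verified:
            # Verify explicitly: even "inside" the network, no MFA means no access.
            return False
        # Least privilege: access requires an explicit grant for this resource.
        return resource in ROLE_GRANTS.get(user_role, set())
    ```

    The key design choice is that the function falls through to a denial unless every check passes; there is no “trusted internal network” shortcut. Real deployments layer on device posture, session risk scoring, and continuous re-evaluation, but the deny-by-default shape is the same.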

    As sophisticated as “cybercrime 2.0” has become, many incidents still boil down to exploiting basic weaknesses. By mastering the fundamentals and building strong defenses, mid-sized businesses can drastically improve their resilience against cyber threats. With a consistent, multilayered strategy and vigilant sentries (your people and your monitoring systems), you stand a much better chance of detecting and thwarting attackers.

    Conclusion

    The tales of the Allianz data breach and the Noida tech support scam illuminate two sides of the new age of cybercrime, in which both high-tech and low-tech tactics thrive. For mid-sized businesses, these are not distant threats; they are warnings.

    The silver lining is that awareness is growing, and the tools and knowledge to fight back are more accessible than ever. Law enforcement agencies across borders are cooperating to take down criminal networks. By applying the right frameworks and investing in people and process (not just technology), mid-sized firms can level the playing field despite attackers’ advantages. Think of cybersecurity as an investment in your company’s longevity and trustworthiness.

    The fight against cybercrime is now a permanent fixture of doing business in the digital age. The threats will continue to evolve – tomorrow it might be an AI-driven phishing attack or a deepfake voice message from “your CEO” asking for a funds transfer. But the core defense remains the same: knowledge, preparedness, and agility. The companies that endure will treat security as a continuous journey, not a one-time fix. The new age of cybercrime is upon us, but with resilience and foresight, we can ensure it’s an age of cyber vigilance for the defenders as well.