    Definity embeds agents inside Spark pipelines to catch failures before they reach agentic AI systems

    AI News · April 29, 2026 · 5 Mins Read
    For most data engineering teams, managing pipeline reliability often means waiting for an alert, manually tracing failures across distributed jobs and clusters, and fixing problems after they've already hit the business. Agentic AI needs the data to be there, clean and on time. A pipeline that fails silently or delivers stale data doesn't just break a dashboard — it breaks the AI system depending on it.

    That gap is what Definity, a Chicago-based data pipeline operations startup, is built around: embedding agents directly inside the Spark or DBT driver to act during a pipeline run, not after it. One enterprise customer identified 33% of its optimization opportunities in the first week of deployment and cut troubleshooting and optimization effort by 70%, according to Definity. The company also claims customers are resolving complex Spark issues up to 10x faster.

    "You need three big things for agentic data operations: full stack context that is real time and production aware. Control of the pipeline. And the ability to validate in a feedback loop. Without that, you can be outside looking in and read only," Roy Daniel, CEO and co-founder of Definity, told VentureBeat in an exclusive interview.

    The company on Wednesday announced that it has raised $12 million in Series A financing led by GreatPoint Ventures, with participation from Dynatrace and existing investors StageOne Ventures and Hyde Park Venture Partners.


    Why existing pipeline monitoring breaks down at scale

    Existing tools approach the problem from outside the execution layer — Datadog, which acquired data quality monitor Metaplane last year, Databricks system tables, and platforms like Unravel Data and Acceldata all read metrics after a job completes. Dynatrace has monitoring capabilities; it also participated in Definity's Series A.

    What differentiates Definity, according to Daniel, is where the solution sits in the architecture. By the time a platform monitoring tool surfaces a problem, the pipeline has already run, and the failure, the wasted compute, or the bad data is already downstream.

    "It's always after the fact," Daniel said. "By the time you know something happened, it already happened."

    How Definity's in-execution agents work

    The core architectural difference is where the agent sits — inside the pipeline rather than watching from outside it.

    Inline instrumentation. The Definity system installs a JVM agent directly inside the pipeline execution layer via a single line of code, running below the platform layer and pulling execution data directly from Spark.
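The article doesn't show Definity's actual integration, but as a purely illustrative sketch: Spark's standard mechanism for loading a JVM agent into the driver process is the `spark.driver.extraJavaOptions` setting, which is plausibly the kind of "single line" being described. The jar path and app name below are assumptions, not Definity's real artifact.

```python
# Illustrative only: loading a hypothetical JVM agent into the Spark
# driver via standard Spark configuration. Requires pyspark installed.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("pipeline-with-inline-agent")
    .config("spark.driver.extraJavaOptions",
            "-javaagent:/opt/agent/agent.jar")  # hypothetical agent path
    .getOrCreate()
)
```

Because the agent rides inside the driver JVM, it sees execution events directly rather than scraping metrics after the fact.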

    Execution context during the run. The agent captures query execution behavior, memory pressure, data skew, shuffle patterns and infrastructure utilization as the pipeline runs. It also infers lineage between pipelines and tables dynamically — no predefined data catalog is required.
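Dynamic lineage inference of this kind can be sketched in a few lines: if the agent observes which tables each pipeline reads and writes at execution time, dependencies fall out of the overlap. This is a minimal illustration of the idea, not Definity's implementation; the event format is an assumption.

```python
from collections import defaultdict

def infer_lineage(events):
    """Infer pipeline -> upstream-pipeline lineage from observed I/O.

    `events` is a list of (pipeline, action, table) tuples captured at
    execution time, e.g. ("daily_agg", "read", "raw.clicks").
    No predefined data catalog is needed.
    """
    reads, writes = defaultdict(set), defaultdict(set)
    for pipeline, action, table in events:
        (reads if action == "read" else writes)[pipeline].add(table)
    # A pipeline depends on another if it reads a table the other writes.
    deps = defaultdict(set)
    for downstream, in_tables in reads.items():
        for upstream, out_tables in writes.items():
            if upstream != downstream and in_tables & out_tables:
                deps[downstream].add(upstream)
    return dict(deps)
```

Feeding in observed reads and writes from a run yields a dependency graph without anyone declaring it up front.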

    Intervention, not just observation. The agent can modify resource allocation mid-run, stop a job before bad data propagates or preempt a pipeline based on upstream data conditions. Daniel described one production deployment where the agent detected that an upstream job had been preempted and the input table it was supposed to write was stale — and stopped the downstream pipeline before it started, before bad data reached any dependent system.
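The staleness preemption Daniel describes reduces to a simple gate: before a downstream pipeline starts, compare the upstream table's last successful write against a freshness budget. A minimal sketch, with the function name and threshold as assumptions:

```python
import time

def should_run(upstream_last_write_ts, max_staleness_s, now=None):
    """Gate a downstream pipeline on upstream freshness.

    Returns True only when the upstream table was written within the
    allowed staleness window; otherwise the run is preempted before
    any stale data can propagate to dependent systems.
    """
    now = time.time() if now is None else now
    return (now - upstream_last_write_ts) <= max_staleness_s
```

In a real deployment the agent would learn the freshness expectation from observed cadence rather than a hand-set threshold, but the decision at run start is the same shape.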

    What is and isn't real time. Detection and prevention are real time. Root cause analysis and optimization recommendations run on demand when an engineer queries the assistant, with full execution context already assembled.

    Overhead and data residency. The agent adds approximately one second of compute on an hour-long run. Only metadata transmits externally; full on-premises deployment is available for environments where no metadata can leave the perimeter.
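As a quick sanity check on the overhead claim (simple arithmetic, not measured data):

```python
# One second of agent compute on a one-hour (3600 s) run.
overhead_fraction = 1 / 3600
print(f"{overhead_fraction:.4%}")  # about 0.03% of job runtime
```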

    What in-execution intelligence looks like in a production environment

    One early user of the Definity platform is Nexxen, an ad tech company running large-scale, mission-critical Spark advertising pipelines on-premises.

    Dennis Meyer, Director of Data Engineering at Nexxen, told VentureBeat that the core problem he was facing was not pipeline failures but the accumulating cost of inefficiency in an environment with no elastic cloud capacity to absorb waste.

    "The main challenge wasn't about pipelines breaking, but about managing an increasingly complex and large-scale environment," Meyer said. "Because we operate on-prem, we don't have the flexibility of instant elasticity, so inefficiencies have a direct cost impact."

    Existing monitoring tools gave Nexxen partial visibility but not enough to act on systematically. "We had existing monitoring tools in place, but needed full-stack visibility to understand workload behavior holistically and to systematically prioritize optimizations," Meyer said.

    Nexxen deployed Definity with no pipeline code changes. According to Meyer, the team identified 33% of its optimization opportunities within the first week, and engineering effort on troubleshooting and optimization dropped by 70%. The platform freed infrastructure capacity, allowing the team to support workload growth without additional hardware investment.

    "The key shift was moving from reactive troubleshooting to proactive, continuous optimization," Meyer said. "At scale, the biggest gap often isn't tooling — it's actionable visibility."

    What this means for enterprise data teams

    For data engineering teams running production Spark environments, the shift from reactive monitoring to in-execution intelligence has architectural and organizational implications worth thinking through.

    Pipeline ops is becoming an AI infrastructure problem. Data pipelines that previously supported analytics now carry AI workloads with direct business dependencies. Failures that were once an inconvenience are now blocking production AI delivery.

    Troubleshooting time is a recoverable cost. According to Meyer, Nexxen cut engineering effort on troubleshooting and optimization by 70% after deploying Definity. For teams running lean, reclaiming that time for roadmap work is the most direct near-term case for evaluating this category.


