How about law firms becoming software providers (instead of being replaced by them)? (Part I)
An investment in external tools could hamper a firm's ability to compete
(You are reading a GenAI-free article, relying solely on auto-correct for some expressions and typos. The image above is AI-free as well, retrieved from Canva’s photo repository.)
As established law firms become inundated with LLM-powered solutions, many question the very need for lawyers: what is to stop their own customers from using the same tools? I believe such extrapolation is flawed1 and distracts us from a more urgent debate.
I think we should be paying attention to the convergence of three particular dynamics instead:
Today’s proprietary models do not provide stable enough grounds on which to build long-term solutions that ensure differentiation. I will cover this in Part I, today: “Building a house on the back of a wounded whale”.
Convergence (of LLMs) is inevitable, and this will do away with the benefits of relying on state-of-the-art components, strengthening the case for commoditization and ensuring that all of the value remains at an even higher level of the stack, in the daily use of tools and the actual delivery of professional services. I will cover this in Part II.
While “language intelligence” leads to a common understanding of the world, it alone cannot take us far enough in the absence of scaffolding. There is a tension between probabilistic and deterministic building blocks, which I will cover in Part III (“Bringing back the armour”).
Aiming for an actionable conclusion, I will address the role of Agentic AI in solving this tension by modularizing every component and transforming every “job to be done” into a mixed workflow. The service becomes software insofar as it relies on automation, but software itself has, in the process, become an integral part of the service, with little distinction between “structure” and human intervention. I will discuss this in Part IV (“How agents broker peace”).
Part I. Building a house on the back of a wounded whale
Most AI solutions being tested or acquired by law firms today (including those mentioned in our study) remain “LLM wrappers”, so there is little room for surprises where their offerings are concerned.
Specifically, we are talking about legal research, document comparison, document generation, contract reviews, real-time transcripts/context, litigation analysis, due diligence, and other language-based aids applying various levels of automation.
Their primary advantage over the all-purpose solutions provided by the very AI labs that underpin their offering boils down to calculated constraints. With the underlying training datasets mostly out of reach (and out of their control), each company sitting at the top of the pyramid has applied its own “massage” to the untamed “beast”: filtering, baked-in prompting, rules, access controls, audit trails, or human-in-the-loop workflows.
As expected, these solutions have also added certain “static” (i.e., not LLM-driven) features of their own: access controls, audit trails, integration with common document management tools, and prompt templates.
There is an obvious risk in this fragile equilibrium for both vendors and law firms: AI labs are losing money at unprecedented rates (OpenAI loses $667m each month) and keep climbing up their own stack to tap into any possible source of future revenue. They are definitely capable of cutting off these vertical integrators, either through their control of available tokens or by channeling their massive user base towards alternative niche offerings. Not to mention their ability to learn fast from all of the work happening on their own servers (as these solutions will, with very rare exceptions, relay their prompts to a lab’s API for inference), eventually commoditizing vendors and flattening much of the value that makes each firm unique.
It therefore sounds quite sensible to avoid both the larger AI labs and any tools sitting on top of their proprietary models. This means leveraging open-weight models, whether on public clouds (e.g., Amazon Web Services) or on each organization’s own servers (a private cloud running the firm’s GPUs). This may sound adventurous, as well as extremely expensive (after all, isn’t everyone free-riding on OpenAI?), but we are already doing it, and neither seems to be the case2.
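To make this concrete, here is a minimal sketch of what querying such a self-hosted, open-weight model can look like, assuming it is served behind an OpenAI-compatible endpoint (for instance with an inference server such as vLLM). The host name, model choice, and prompts below are illustrative placeholders, not a description of our actual deployment:

```python
# Minimal sketch: querying a self-hosted, open-weight model through an
# OpenAI-compatible endpoint. Host, model, and prompts are placeholders.
from openai import OpenAI

# Point the client at the firm's own inference server (e.g., vLLM) instead
# of a proprietary lab's API, so prompts never leave private infrastructure.
client = OpenAI(
    base_url="http://llm.internal.firm.example:8000/v1",  # hypothetical internal host
    api_key="not-needed-for-local-serving",               # local servers typically ignore it
)

response = client.chat.completions.create(
    model="Qwen/Qwen2.5-72B-Instruct",  # an open-weight model; swap per workload
    messages=[
        {"role": "system", "content": "You are a careful legal drafting assistant."},
        {"role": "user", "content": "Summarize the indemnity clause below in plain English: ..."},
    ],
    temperature=0.2,  # keep generations conservative for legal use
)
print(response.choices[0].message.content)
```

Because the endpoint speaks the same protocol as the proprietary APIs, the firm can swap models (or providers) without rewriting the tools built on top of them.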
Stay tuned for Part II (“Convergence is inevitable”).
1. See page 9 in our recent paper, “When your competitor isn’t a law firm”.
2. Based on our preliminary analysis (a joint internal deployment with our technology partners and portfolio companies, on our own servers), we are incurring slightly over $5,000 in monthly cost of ownership when running various Qwen and DeepSeek models on six H100 GPUs and projecting hardware (including optimized RAM, CPUs, and other resources) as well as energy costs over five years. These models and usage levels have so far proven sufficient to feed the solutions that we’ll be discussing in future blog posts.
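For readers wondering how a figure in that range comes together, here is a rough back-of-envelope sketch. Every unit price below (GPU cost, ancillary hardware, power draw, electricity price) is an assumption made purely for illustration, not taken from our actual invoices; only the six GPUs and the five-year horizon come from the analysis above:

```python
# Back-of-envelope monthly cost of ownership for a small self-hosted cluster.
# All unit prices are illustrative assumptions, not actual invoices.
GPU_COUNT = 6                  # six H100 GPUs, as per the footnote
GPU_UNIT_PRICE = 32_000        # assumed USD per H100
OTHER_HARDWARE = 60_000        # assumed chassis, RAM, CPUs, storage, networking (USD)
AMORTIZATION_MONTHS = 5 * 12   # five-year projection, as per the footnote

POWER_DRAW_KW = 7.0            # assumed average draw, incl. cooling overhead
ENERGY_PRICE_KWH = 0.15        # assumed USD per kWh
HOURS_PER_MONTH = 24 * 30

hardware_monthly = (GPU_COUNT * GPU_UNIT_PRICE + OTHER_HARDWARE) / AMORTIZATION_MONTHS
energy_monthly = POWER_DRAW_KW * ENERGY_PRICE_KWH * HOURS_PER_MONTH

print(f"Hardware: ~${hardware_monthly:,.0f}/month")                   # ~$4,200
print(f"Energy:   ~${energy_monthly:,.0f}/month")                     # ~$756
print(f"Total:    ~${hardware_monthly + energy_monthly:,.0f}/month")  # ~$4,956
```

Under these assumed prices the total lands in the roughly $5,000-per-month range cited above; actual figures will of course depend on procurement terms, utilization, and local energy prices.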