Over the past year, the conversation around artificial intelligence has shifted from novelty to necessity. In global financial centres, the question is no longer whether AI will reshape institutional workflows, but rather how fast, how safely, and on what. In India, this transition is occurring alongside a once-in-a-generation expansion of capital markets, new retail participation, and massive digitisation of financial data.
At Thurro, working closely with asset managers, equity research teams, investment advisors, regulators, and wealth platforms has given us a front-row seat to how AI systems behave in the real world. Beneath the excitement and rapid experimentation, a set of deeper structural learnings has emerged, lessons that matter not only for us as a company, but for anyone thinking seriously about AI + Finance + India.
These are practical learnings. They come from observing how models behave at scale, how analysts interact with probabilistic systems, and how institutional requirements around security, workflow rigidity, and data consistency shape what is possible.
Below are five important lessons we have learnt this year.
Data foundations matter most
For decades, computing had a simple organising principle: GIGO, or garbage in, garbage out. If the data was poor, the output would be poor. In the LLM era, this principle becomes even more critical. Models today ingest millions of tokens, draw correlations across vast corpora, and operate at layers of abstraction that traditional software never touched. This magnifies even small errors upstream.
A model may not hallucinate at all; it may simply inherit an incorrect earnings number from a brokerage report, a misclassified cash-flow line item, or an outdated management commentary. When the underlying data is inconsistent, the model’s reasoning chain becomes incoherent.
This is why we treat data architecture, data cleaning, and ongoing data management as a core product. The model is an interpretive layer on top of this foundation. In many cases, the constraint is not model performance but whether the underlying dataset is correct, complete, and consistently structured across time.
In financial services, the standard of output required is very high. A single mislabelled ratio, an inconsistent earnings number, or an incorrectly tagged cash-flow metric can ripple into an entirely wrong conclusion. Over time, the industry will recognise that the defensible moat is not model size or latency. It is data quality, data governance, and data consistency across time.
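The kind of upstream check this implies can be sketched in a few lines. The example below is illustrative only: the class, field names, and tolerance are assumptions, but it shows the basic discipline of flagging the same metric when two sources disagree, before any model sees the data.

```python
from dataclasses import dataclass

# Illustrative record: one reported metric as captured from one source.
@dataclass
class MetricObservation:
    ticker: str
    metric: str      # e.g. "net_profit"
    period: str      # e.g. "FY2024-Q3"
    value: float
    source: str      # e.g. "exchange_filing", "broker_report"

def check_consistency(observations, tolerance=0.005):
    """Flag (ticker, metric, period) groups whose values disagree by more
    than `tolerance` (relative), so they can be reviewed upstream."""
    groups = {}
    for obs in observations:
        groups.setdefault((obs.ticker, obs.metric, obs.period), []).append(obs)

    flagged = []
    for key, group in groups.items():
        values = [o.value for o in group]
        base = max(abs(v) for v in values)
        if base and (max(values) - min(values)) / base > tolerance:
            flagged.append((key, sorted(o.source for o in group)))
    return flagged

obs = [
    MetricObservation("ACME", "net_profit", "FY2024-Q3", 1520.0, "exchange_filing"),
    MetricObservation("ACME", "net_profit", "FY2024-Q3", 1250.0, "broker_report"),
]
print(check_consistency(obs))  # the disagreement is surfaced, with both sources named
```

In a real pipeline the tolerance, grouping keys, and escalation path would be policy decisions, not constants; the point is that the check runs before the model, not after.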
Probabilistic systems need expectation management
For more than 25 years, financial professionals have interacted with software that behaves predictably. Press a button; get the same result every time. This shaped user expectations around reliability, auditability, and control. That mental model is deeply embedded.
Traditional software is deterministic: the same inputs through the same program yield the same output every time. LLMs do not behave that way. They are probabilistic engines, and the same prompt does not guarantee identical outputs across runs. Outputs can be stabilised with fixed sampling parameters (such as temperature and nucleus sampling), deterministic decoding strategies, consistent prompt structures, and seeded generation where available. Even then, inference rests on sampling from a probability distribution over tokens rather than on rule-based execution, and residual nondeterminism can creep in through floating-point arithmetic and batching.
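A toy model of a single decoding step makes the distinction concrete. This is a minimal sketch, not how any production LLM is served: the vocabulary and logits are invented, but the mechanics of temperature-scaled sampling, and the effect of seeding, are the real ones.

```python
import math
import random

def sample_token(logits, temperature=0.8, rng=None):
    """Temperature-scaled softmax sampling over a toy vocabulary.
    The output is drawn from a distribution, not computed by a rule."""
    rng = rng or random
    scaled = [l / temperature for l in logits.values()]
    m = max(scaled)                         # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    tokens = list(logits.keys())
    r = rng.random()
    cum = 0.0
    for tok, p in zip(tokens, probs):
        cum += p
        if r < cum:
            return tok
    return tokens[-1]

logits = {"buy": 2.1, "hold": 1.9, "sell": 0.3}

# Unseeded runs may differ from one another across executions...
run_a = [sample_token(logits) for _ in range(5)]

# ...whereas a fixed seed makes the sampling reproducible.
run_b = [sample_token(logits, rng=random.Random(42)) for _ in range(5)]
run_c = [sample_token(logits, rng=random.Random(42)) for _ in range(5)]
print(run_b == run_c)  # True: same seed, same draws
```

Seeding pins down the sampling, but it does not change the underlying fact that each token is a draw from a distribution, which is why variance has to be managed rather than assumed away.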
For clients such as alternative asset managers, equity research teams, investment advisors, regulators, and wealth platforms, this is a structural shift. Consistency is not a nice-to-have; it is a compliance requirement. Analysts need to know why a conclusion was reached. Portfolio managers need traceability. Risk teams need measurable variance.
The challenge, therefore, is not only technical; it is behavioural. It is an expectation-management challenge. Systems must be designed to minimise variance where possible, explain sources of truth, and enforce guardrails so that outputs remain stable across repeated use. This is also where a well-curated internal data lake matters: models grounded in a bounded, authenticated corpus are far less prone to drifting or pulling in spurious context than systems allowed to roam the open web. Users must be educated to understand that probabilistic does not mean unreliable; it means the system captures patterns differently from deterministic software.
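One such guardrail, restricting retrieval to a bounded, authenticated corpus, can be sketched as below. The source names, documents, and relevance scoring are all hypothetical; the point is that unapproved material is filtered out before it can ever reach the model's context.

```python
# Hypothetical allow-list of authenticated source types.
APPROVED_SOURCES = {"exchange_filing", "earnings_call_transcript", "regulator_disclosure"}

# Toy corpus mixing curated and uncurated material.
CORPUS = [
    {"id": "doc-1", "source": "exchange_filing", "text": "FY2024 annual report summary"},
    {"id": "doc-2", "source": "social_media", "text": "unverified Q3 earnings rumour"},
    {"id": "doc-3", "source": "earnings_call_transcript", "text": "Q3 earnings call remarks"},
]

def retrieve(query, corpus=CORPUS):
    """Return only documents from approved sources, ranked by a toy
    relevance score (shared words between query and document text)."""
    allowed = [d for d in corpus if d["source"] in APPROVED_SOURCES]
    q = set(query.lower().split())
    return sorted(allowed,
                  key=lambda d: len(q & set(d["text"].lower().split())),
                  reverse=True)

hits = retrieve("Q3 earnings call")
print([d["id"] for d in hits])  # doc-2 is excluded regardless of how relevant it looks
```

A production system would rank with embeddings rather than word overlap, but the allow-list sits in the same place: before scoring, not after generation.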
AI will not replace research workflows, but it will reshape how analysts think about consistency, documentation, and interpretability. Institutions that recognise and adapt to this shift early will be better positioned than those that assume past expectations will simply carry forward.
Workflow integration determines real adoption
One of the clearest observations from the past year is that the market has shifted decisively away from “do-it-yourself” tools. Analysts do not want a new tab, a new interface, or a standalone chatbot. They want AI embedded inside the workflows they already trust.
This requires clarity on where AI should sit, how it should behave, and how deeply it should integrate into research, reporting, and decision-making cycles. It also requires a practical understanding that most institutions prefer a “do it for me” approach—where the system is configured, tuned, and maintained for them rather than leaving each team to build its own pipelines.
For us, this has meant working more closely with clients to help shape not only the model outputs but also the way those outputs fit into their daily processes. The goal is not to deliver an AI tool; it is to deliver an AI-augmented workflow.
The companies that succeed in AI for finance will not be those that build the most features. They will be those that integrate most seamlessly into the client’s day.
Enterprise-grade security is non-negotiable
Security has always been central to financial services, but AI systems change the risk profile in observable ways. They centralise data that was previously distributed, rely on cloud-based integrations and APIs, and introduce new exposure points such as retrieval layers and embeddings. At the same time, the cost and automation of attacks have fallen. Ransomware-as-a-service is now a commercial product; for a few dollars, attackers can rent malware, harvest credit-card data, run automated OTP-capture scripts, and exit without leaving a trace. This is the background against which AI platforms must operate.
AI systems amplify the importance of security because they touch sensitive data in new ways. They ingest it, transform it, embed it, and sometimes generate new structures from it. Every step must be controlled to ensure client confidentiality and regulatory compliance.
Curation is the only path to consistency
A less obvious but equally important learning is the role of curation. Relying on open media, such as general news publications or unfettered social media content, introduces inconsistency and noise. The problem is compounded by the growing volume of AI-generated, unverified content in the public domain, which can enter training or retrieval pipelines without clear provenance. When large language models ingest unauthenticated or weakly sourced material, errors are not merely repeated; they are normalised and amplified across outputs. Numbers change, interpretations differ, and errors can propagate unnoticed. For financial analysis, inconsistency is costly: when you are comparing management commentary across quarters, tracking changes in capex guidance, or reconciling margin commentary with cash-flow disclosures, the smallest variation in source quality can lead to incorrect conclusions.
This is why we have always relied on curated, stable, primary sources: filings, conference call transcripts, regulator disclosures, company-released documentation, and vetted databases. Curation creates an environment where every data point is traceable, every trend is comparable, and every insight rests on a stable foundation. In a probabilistic system, consistency of input becomes the anchor of reliability.
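Traceability of this kind ultimately comes down to what a single data point carries with it. A minimal sketch, with entirely illustrative field names, is a record that bundles the figure with its provenance and a fingerprint, so any silent change to either is detectable downstream.

```python
import hashlib
from dataclasses import dataclass

@dataclass(frozen=True)
class CuratedDataPoint:
    """One figure plus the provenance needed to trace it back to a
    primary source. All fields and names here are illustrative."""
    metric: str
    period: str
    value: float
    source_type: str      # e.g. "exchange_filing"
    source_document: str  # e.g. a filing identifier
    retrieved_at: str     # ISO-8601 timestamp

    def fingerprint(self):
        # Stable hash over the full record: if the value or any piece
        # of provenance changes, the fingerprint changes with it.
        payload = "|".join([self.metric, self.period, repr(self.value),
                            self.source_type, self.source_document,
                            self.retrieved_at])
        return hashlib.sha256(payload.encode()).hexdigest()[:16]

dp = CuratedDataPoint("capex_guidance", "FY2025", 4200.0,
                      "earnings_call_transcript", "ACME-Q3-FY2024-call",
                      "2024-11-02T10:15:00Z")
print(dp.fingerprint())
```

Freezing the dataclass and hashing the whole record are the two design choices doing the work: a curated data point cannot be mutated in place, and an altered copy cannot masquerade as the original.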
Discipline is the differentiator
The next phase of AI in finance will be defined by discipline, not experimentation.
AI in finance is often discussed in terms of model size, capability, or speed of innovation. But the real work — the work that determines whether a system will be trusted, adopted, and used meaningfully — lies elsewhere. It lies in data discipline, expectation management, workflow depth, enterprise security, and source curation.
As we look ahead, the winners in this space will not be those who simply deploy AI, but those who engineer trust, reliability, and contextual intelligence into every layer of the stack.
Unlock the power of alternative data
Do not just follow the market — stay ahead of it. Thurro helps you transform raw filings and alternative datasets into actionable insights.
Explore Thurro AltData
Book a demo
