ReportWire

Tag: Data Governance and Management

  • What type of data is needed to find opportunities | Insights | Bloomberg Professional Services

    For instance, State Street now generates a continuous stream of inflation data for its clients, merging traditional indicators with alternative data sources like observed consumer spending and digital news. “If we look at central banks and interest rate projections, you can use this information set to address gaps when the central bank isn’t speaking,” says Clark. “What is the rhetoric around a central bank? Our research shows that this can have implications for forecasting yields. More interestingly, you’re getting breadth in perspective and incremental alpha outside of periods where conventional data sources are available.” 

    Notably, Bloomberg offers alternative data solutions to its clients via Bloomberg Terminal and data feeds, and these include consumer transaction data analytics from Bloomberg Second Measure and foot traffic data analytics from Placer.ai, as well as Similarweb’s web traffic data. 

    How data drives discretionary versus systematic processes 

    Discretionary managers now analyze a broader universe of securities due to scalable data infrastructure, while quant teams increasingly translate unstructured data into structured signals for models. These trends support the convergence of systematic and discretionary styles, driven by AI-enhanced research workflows and improved data engineering practices. 

    “From a discretionary point of view, we’re seeing discretionary managers able to look over a much broader breadth of names because they’ve got the scalability to gain insight from that data. They’re able to pick out things they never could before,” says Tushara Fernando, Head of Data and Machine Learning at the Man Group. “From a systematic point of view, we’re able to translate and quantize unstructured data into more structured data that we can use in our quant models.”    

    Indeed, the proliferation of data and AI tools has pushed discretionary and systematic approaches towards convergence, says Systematica’s Dooms. “Discretionary managers get a lot of benefit from GenAI tools in terms of adding code to their process, making it more systematic. Systematic investors get to use data traditionally in the human realm – unstructured data – and parse it into signals,” he explains, adding, “Under the hood, there’s a lot of work to get that process right: how do you shape and architect the data to make it consumable by AI workflows?” 
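
    To make Dooms’s point concrete, here is a minimal sketch of what parsing unstructured text into a structured signal can look like. The keyword lists, tickers, and scoring scheme are invented for illustration and are not any firm’s actual methodology.

    ```python
    # Illustrative sketch only: turning unstructured headlines into a structured
    # signal a quant model could consume. Keywords and scoring are hypothetical.
    from collections import defaultdict

    POSITIVE = {"beats", "upgrade", "growth", "record"}
    NEGATIVE = {"misses", "downgrade", "lawsuit", "recall"}

    def headline_score(headline: str) -> int:
        """Score one headline: +1 per positive keyword, -1 per negative."""
        words = set(headline.lower().split())
        return len(words & POSITIVE) - len(words & NEGATIVE)

    def build_signals(headlines: list[tuple[str, str]]) -> dict[str, float]:
        """Aggregate per-ticker headline scores into an average signal."""
        totals, counts = defaultdict(int), defaultdict(int)
        for ticker, text in headlines:
            totals[ticker] += headline_score(text)
            counts[ticker] += 1
        return {t: totals[t] / counts[t] for t in totals}

    signals = build_signals([
        ("ACME", "ACME beats estimates, analysts see record growth"),
        ("ACME", "Regulator opens lawsuit against ACME unit"),
        ("GLOBO", "GLOBO misses on revenue, downgrade follows"),
    ])
    print(signals)  # e.g. {'ACME': 1.0, 'GLOBO': -2.0}
    ```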

    The rise of agentic workflows 

    Zooming in on agentic AI, experts point to the technology’s early progress in enabling practical, tool-driven workflows. Says Dooms, “To me, agentic AI is not just about chain-of-thought and automation – that’s table stakes. Your basic ChatGPT-style chatbot does planning and thinking. It’s really about tool-calling: architecting processes where you can identify things you were not able to do before and can now do thanks to scalability and then presenting data so it can be used by an LLM.”  
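
    Below is a minimal sketch of the tool-calling pattern Dooms describes, assuming a hypothetical `call_llm` stand-in for a chat-completion API: the model is not just planning, it chooses tools whose structured output feeds the next step. The tool names and routing logic are invented for illustration.

    ```python
    # Sketch of an agentic tool-calling loop. `call_llm` is a stand-in for any
    # chat-completion API; a real agent would let the model pick the tool.
    import json

    def get_positions(portfolio: str) -> str:
        return json.dumps({"portfolio": portfolio, "AAPL": 120, "MSFT": 80})

    def get_price(ticker: str) -> str:
        return json.dumps({"ticker": ticker, "price": 101.25})

    TOOLS = {"get_positions": get_positions, "get_price": get_price}

    def call_llm(messages: list[dict]) -> dict:
        # Hypothetical stand-in: first turn requests a tool, second answers.
        if not any(m["role"] == "tool" for m in messages):
            return {"tool": "get_positions", "args": {"portfolio": "alpha-1"}}
        return {"answer": "Portfolio alpha-1 holds 120 AAPL and 80 MSFT."}

    def run_agent(user_query: str) -> str:
        messages = [{"role": "user", "content": user_query}]
        while True:
            step = call_llm(messages)
            if "answer" in step:                  # model is done reasoning
                return step["answer"]
            result = TOOLS[step["tool"]](**step["args"])  # execute chosen tool
            messages.append({"role": "tool", "content": result})

    print(run_agent("What does portfolio alpha-1 hold?"))
    ```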

    State Street’s Clark cites his own organization’s internal analytics capability as a prime example of agentic AI’s potential. “We’ve got different data sources – our own research, structured and unstructured data – and we’ve got agents querying tools to trigger further actions: generating investment insights for clients or internal stakeholders, triggering signals for capital markets settings, etc.,” he explains. “We’re not at the end state where that’s fully implementable, but we’re well into that pathway.” 

    Why democratizing data access is essential for scalable investment processes 

    Implementation of cutting-edge data and AI tools still requires human input. Indeed, making the same data available to everyone from entry-level employees to the C-suite is one key to success. “It’s incredibly important for us to provide an infrastructure for traders, junior traders and desk-side analysts to have access to all the data we have in a seamless fashion, and to provide them with low-code or no-code solutions so they can play with their own data and derive insight,” says Man Group’s Fernando.  

    “We want to give them a platform to do their own testing and back testing, analyze their flows and profitability, and tell us how to be more proactive, having done some of the work themselves. That’s key to scaling up our contribution,” he adds. 
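
    A toy version of the kind of self-service backtest Fernando describes: a trader tests a simple moving-average crossover against their own price series. Prices and parameters are made up; no real platform or data source is implied.

    ```python
    # Minimal self-service backtest sketch: long 1 unit while the fast moving
    # average is above the slow one. All inputs are invented.

    def sma(prices: list[float], window: int) -> list[float]:
        """Simple moving average; None until the window fills."""
        out = []
        for i in range(len(prices)):
            if i + 1 < window:
                out.append(None)
            else:
                out.append(sum(prices[i + 1 - window:i + 1]) / window)
        return out

    def backtest(prices: list[float], fast: int = 3, slow: int = 5) -> float:
        """Accrue P&L on yesterday's position, then update the position."""
        f, s = sma(prices, fast), sma(prices, slow)
        pnl, position = 0.0, 0
        for i in range(1, len(prices)):
            pnl += position * (prices[i] - prices[i - 1])
            if f[i] is not None and s[i] is not None:
                position = 1 if f[i] > s[i] else 0
        return pnl

    prices = [100, 101, 103, 102, 104, 106, 105, 107, 108, 107]
    print(f"Cumulative P&L: {backtest(prices):.2f}")
    ```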

    State Street’s Clark agrees with this statement, observing, “I think the big innovation is that data is now for everyone. The notion that you’re a decision maker but someone else handles data and insight is dead. Building data literacy is the big innovation.”  

    Interested in more insights from the Bloomberg Enterprise Tech & Data Summit 2025 in London? Click here. Learn more about Bloomberg Enterprise Tech & Data solutions here. 

    Insights in this article are based on panels and fireside discussions at the Enterprise Tech & Data Summit held in London in November 2025. 

  • How do companies integrate private market data at scale? | Insights | Bloomberg Professional Services

    What’s driving the push to integrate private market data with public market workflows? How are investors and asset managers addressing persistent data frictions to create a unified view across portfolios? 

    This episode of Market Dialogues features Leila Sadiq, Global Head of Enterprise Data Content at Bloomberg, in discussion with Todd Hirsch, Head of Private Capital at Point72; Mark Neely, Director of Alternative Investments at GenTrust; and Avi Turetsky, Partner and Head of the Quantitative Research Group at Ares, on how private markets are evolving toward greater transparency, valuation consistency, and data connectivity. 

    Source: Enterprise Data & Tech Summit, October 16, 2025, New York

    The Market Dialogues podcast series provides access to curated, thought-provoking discussions from Bloomberg global events. It offers in-depth insights from experts on key trends and themes driving the markets today and beyond. 

    Discover more conversations in the Market Dialogues series here. 

    Featured insights from this episode of Market Dialogues: 

    On technology changing valuation transparency

    Todd Hirsch: Technology has enabled us to have much greater frequency of inputs for valuations. Whether they’re public or private… I think you can have a better sense of how often you want to mark your positions and how frequently you need to adjust those marks based on the inputs you have.

    When new information comes in, it’s important that it gets incorporated. So if a building sells down the street and you now know there’s a new comparable, you can incorporate that in real time. The frequency of adjustments on the private side today is much better than it has ever been, and transparency for investors continues to improve as that frequency increases, and the supporting data and analytics become stronger.* 
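
    A hedged illustration of the mark-to-comparable updating Hirsch describes: when a new comparable sale arrives, blend it into the existing mark. The blending weight and all figures are hypothetical, not an actual valuation policy.

    ```python
    # Sketch: adjust a private asset's mark when a new comparable sale prints.
    # The 0.25 weight and all dollar figures are invented for illustration.

    def update_mark(current_mark: float, comp_price_psf: float,
                    asset_sqft: float, weight: float = 0.25) -> float:
        """Blend the existing mark with the value implied by the new comp.

        weight: how much credence the new comparable gets (0..1).
        """
        implied_value = comp_price_psf * asset_sqft
        return (1 - weight) * current_mark + weight * implied_value

    # Building marked at $50M; a neighbor sells at $520/sqft, ours is 100k sqft.
    new_mark = update_mark(current_mark=50_000_000,
                           comp_price_psf=520.0,
                           asset_sqft=100_000)
    print(f"Updated mark: ${new_mark:,.0f}")  # $50.5M = 0.75*50M + 0.25*52M
    ```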

    On the data gap between public and private companies

    Avi Turetsky: We know that public markets show meaningfully higher vol[atility] than private market NAVs. We know the valuations are different: public market valuations represent marginal trades, while private market valuations are appraisal-based. We have a pretty good sense of the relationship between the two.  

    But whether privately owned companies are actually more stable than publicly traded companies, if you’re looking at revenue, EBITDA, or cash flows, as far as I know, no one knows the answer to that question… The prices are more stable… But whether the revenues or EBITDA are more stable, no one knows, because no one’s been able to get the data at a large enough scale, to the best of my knowledge. 
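
    A small sketch of the volatility gap Turetsky describes, using invented quarterly return series: appraisal-based private NAV returns look smoother than public market returns, which by itself says nothing about the stability of underlying revenues or EBITDA.

    ```python
    # Compare annualized volatility of public returns vs. appraisal-based NAV
    # returns. Both quarterly series are made up, purely for illustration.
    import statistics

    public_q  = [0.08, -0.12, 0.10, -0.05, 0.09, -0.07, 0.11, -0.04]
    private_q = [0.03, -0.01, 0.04,  0.01, 0.03,  0.00, 0.04,  0.01]

    def ann_vol(quarterly_returns: list[float]) -> float:
        """Annualize quarterly return volatility (sqrt-of-time scaling)."""
        return statistics.stdev(quarterly_returns) * (4 ** 0.5)

    print(f"Public ann. vol:  {ann_vol(public_q):.1%}")
    print(f"Private ann. vol: {ann_vol(private_q):.1%}")
    # The gap reflects smoother marks; whether underlying fundamentals are
    # actually more stable is the open question Turetsky raises.
    ```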

    On managing allocation across private and public assets 

    Mark Neely: I focus a lot of my time on the allocation model for clients, and it’s always challenging because we’re constantly comparing public and private markets, and they don’t really compare. You look at private equity over a trailing two-year return, and it doesn’t track public equities over the last 10 years.  

    Clients will say, “The public markets are really strong, I want to go into private,” and I’ll say, “Okay, but let’s look at what the EBITDA multiples are that private markets are purchasing at. What are things being marked at?” That transparency, or lack of it, makes it very challenging to take advantage of dislocations or to reallocate capital between private and public markets, or up and down the capital stack in direct lending, broadly syndicated loans (BSLs), and term loans. 
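
    A toy illustration of the multiple check Neely describes: compare the EV/EBITDA multiple implied by a private deal with a public comparable. All figures are invented.

    ```python
    # Compare an entry multiple on a private deal with a public comparable.
    # Enterprise values and EBITDA ($M) are hypothetical.

    def ev_to_ebitda(enterprise_value: float, ebitda: float) -> float:
        return enterprise_value / ebitda

    private_deal = ev_to_ebitda(enterprise_value=1_200, ebitda=100)
    public_comp  = ev_to_ebitda(enterprise_value=1_500, ebitda=150)

    print(f"Private entry multiple: {private_deal:.1f}x")  # 12.0x
    print(f"Public comp multiple:   {public_comp:.1f}x")   # 10.0x
    # An entry multiple above the public comp suggests paying a premium,
    # informing whether to reallocate between private and public exposure.
    ```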

    *Quotations have been edited for brevity and clarity.

  • AI data center workload pivot favors databases over applications | Insights | Bloomberg Professional Services

    Slowdown in application workloads ahead

    As AI agents automate more steps in everyday workflows, less of that work needs to run inside large application suites. That points to slower growth in data-center demand for enterprise resource planning, customer relationship management, human capital management and supply-chain management software. Reasoning-model agents and deep research tools can now autonomously browse the web, pull sources and run analyses on their own — tasks that previously lived in those apps’ user interfaces.

    Engineering software — computer-aided design and computer-aided manufacturing — may skirt these headwinds, as simulation and synthetic-data creation keep workloads anchored in specialized tools.

    Coding agents supercharge testing workloads

    AI coding agents — assistants inside developer tools that suggest, write and fix code — should give a big boost to application development and testing workloads. Agents from Cursor, Anthropic’s Claude Code, GitHub Copilot, OpenAI’s Codex and Gemini Code Assist handle tasks like debugging and appending to existing code. Companies report 30-40% productivity gains on new code written with these agents, which should channel more development and testing to AI data centers. Prompt-based code generation is quickly becoming one of the most-used generative-AI features in existing business applications.

    [Chart: Workload by Accelerator Type]

    Content delivery, cybersecurity also benefit

    As autonomous AI agents plug into business workflows, more mission-critical tasks will run in AI data centers. The rise of reasoning models like OpenAI’s o3 shifts the focus from simply having a model to ensuring that infrastructure is fast, efficient and reliable. That’s a tailwind for content delivery networks (CDNs) from companies like Cloudflare and for cybersecurity providers such as Zscaler. Most companies seek to integrate internal knowledge databases and documentation with LLMs while relying on CDN and cybersecurity vendors to manage token consumption for LLM fine-tuning and inferencing.
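
    A minimal sketch of that integration pattern, assuming a simplified relevance score and token estimate rather than a real retriever or tokenizer: retrieve from an internal knowledge base, trim to a token budget, and assemble the LLM prompt.

    ```python
    # Sketch: retrieval from an internal knowledge base under a token budget.
    # Documents, scoring, and the token estimate are simplified placeholders.

    DOCS = {
        "runbook.md": "Failover procedure for the pricing service ...",
        "faq.md": "How token budgets are set for fine-tuning jobs ...",
        "arch.md": "CDN fronts the inference endpoints; security gates egress ...",
    }

    def score(query: str, text: str) -> int:
        """Crude relevance: count query words present in the document."""
        return sum(1 for w in query.lower().split() if w in text.lower())

    def approx_tokens(text: str) -> int:
        return len(text) // 4      # rough rule of thumb, not a real tokenizer

    def build_prompt(query: str, budget_tokens: int = 50) -> str:
        ranked = sorted(DOCS.items(), key=lambda kv: score(query, kv[1]),
                        reverse=True)
        context, used = [], 0
        for name, text in ranked:
            cost = approx_tokens(text)
            if used + cost > budget_tokens:   # enforce the token budget
                break
            context.append(f"[{name}] {text}")
            used += cost
        return "Context:\n" + "\n".join(context) + f"\nQuestion: {query}"

    print(build_prompt("token budgets for fine-tuning"))
    ```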

    [Chart: Content Delivery Vertical Growth]
