ReportWire

Tag: Microsoft AI

  • The actionable AI playbook: 5 lessons leaders can learn from Transcard’s Virtual CFO – Microsoft in Business Blogs


    Finance teams operate in a high-stakes environment, with competing priorities, limited hours, and unforgiving risk. Transcard’s answer wasn’t another dashboard. It was a Virtual CFO: AI agents that act, not just advise.

    “Leaders wear too many hats,” highlights Jeff Kaufman, Executive Vice President (EVP) of AI and Data Insights at Transcard. “They manage fraud and risk. They make sure hundreds of suppliers are paid on time. They chase customer payments. Large firms have tools to help. But many smaller businesses still rely on spreadsheets. We knew AI could change that.”

    To take some of those hats off, Transcard built Virtual CFO – a secure, proactive AI agent that works around the clock, orchestrating multiple AI agents to watch for risks, flag opportunities and anticipate issues before they become problems. It levels the playing field by enabling 24/7 financial automation, accelerating issue resolution and freeing up time to focus on growth.
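The orchestration pattern described above can be sketched in a few lines: several specialist agents each watch for a class of risk, and a coordinator merges and ranks their findings. This is a minimal illustrative sketch; the agent names, thresholds, and data shapes are assumptions for the example, not Transcard's actual system.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    agent: str
    severity: str
    message: str

# Hypothetical specialist agents; names and thresholds are illustrative.
def fraud_agent(txns):
    # Flag unusually large transactions for review.
    return [Finding("fraud", "high", f"Unusual amount: {t['amount']}")
            for t in txns if t["amount"] > 10_000]

def cashflow_agent(txns):
    # Warn when net cash flow across the period goes negative.
    balance = sum(t["amount"] if t["type"] == "in" else -t["amount"]
                  for t in txns)
    if balance < 0:
        return [Finding("cashflow", "medium", f"Projected balance low: {balance}")]
    return []

def orchestrate(txns):
    # Run every agent, then surface the most severe findings first.
    findings = []
    for agent in (fraud_agent, cashflow_agent):
        findings.extend(agent(txns))
    order = {"high": 0, "medium": 1, "low": 2}
    return sorted(findings, key=lambda f: order[f.severity])
```

In a real deployment the coordinator would also decide which findings to act on automatically and which to escalate to a human, which is where the trust-building discussed later comes in.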

    But the real breakthrough isn’t just automation; it’s reimagination. Virtual CFO offers a clear glimpse of the future of AI – a roadmap for businesses ready to move forward.

    Here are five important lessons from Transcard’s journey worth remembering:

    1. Start with the most pressing challenges

    For Transcard’s customers, that meant cash flow anxiety, fraud risks and manual payments.

    What keeps your customers up at night?

    “You need to listen to your customers directly,” explains Jeff Kaufman. “I call it the ‘day in the life.’ Spend time with them, watch what they do and understand their conflicting priorities. That’s how you find the real pain.”

    And once you spot the issue, ask whether the process is worth improving.


    “Plenty of leaders want AI to automate processes so employees are more efficient. But ask yourself: are you automating broken processes? That reflection matters,” says David Samples, Chief Technology Officer (CTO) at Transcard.

    The cycle of improvement only works when the foundation is strong. True transformation means stepping back to create future-ready ways of working, rather than just speeding up old routines.

    2. Set non‑negotiables early

    Priorities define success. Without them, direction is lost.

    Transcard established three essentials before writing a single line of code:

    • Accuracy, speed and security – must-haves in financial services
    • Customer-first design, shaped by advisory groups and “day-in-the-life” research
    • Action APIs, enabling AI recommendations to trigger real tasks

    Clear frameworks make innovation faster.
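The third non-negotiable – action APIs that turn recommendations into real tasks – can be sketched as a simple registry that maps an agent's recommendation to a concrete operation. The action names and payload shape here are illustrative assumptions, not Transcard's actual API.

```python
# Hypothetical action registry: an AI recommendation names an action,
# and the registry dispatches it to a real task instead of stopping at advice.
ACTIONS = {}

def action(name):
    # Decorator that registers a callable under an action name.
    def register(fn):
        ACTIONS[name] = fn
        return fn
    return register

@action("hold_payment")
def hold_payment(payment_id):
    # Placeholder for a real payments-system call.
    return f"payment {payment_id} placed on hold"

@action("send_reminder")
def send_reminder(invoice_id):
    # Placeholder for a real dunning/notification call.
    return f"reminder sent for invoice {invoice_id}"

def execute(recommendation):
    # A recommendation looks like:
    # {"action": "hold_payment", "args": {"payment_id": "P-42"}}
    fn = ACTIONS.get(recommendation["action"])
    if fn is None:
        raise ValueError(f"unknown action: {recommendation['action']}")
    return fn(**recommendation["args"])
```

The point of the pattern is that the model only ever selects from a fixed, audited set of actions – accuracy and security stay enforceable in code even as recommendations come from an AI.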

    3. Choose partners that accelerate learning

    “Once we had our requirements, we knew we needed a partner we could trust to help smooth out the edges,” reflects David Samples.

    The right partnerships accelerate progress. For Transcard, Microsoft delivered the AI and cloud foundation, along with workshops to develop the skills and shape the vision. Coretek, their implementation partner, assisted with refining infrastructure and building the AI agents.

    Together, these collaborations gave Transcard the confidence to move faster and experiment more boldly.

    4. Ship in waves, earn trust

    Not every customer is ready for automation. That’s why Transcard rolled out in waves, tackling urgent problems first, then adding features as trust grew.

    “When a CFO logs in, they always have a top priority: send a payment, check fraud, resolve a hold. You must solve that first. Meet them where they are, then take them one step further,” points out Jeff Kaufman.

    David adds, “It’s not easy to build an AI agent that executives trust to act autonomously. That’s why you have to co-create with customers. It has to be built with them, for them.”

    Photo of David Samples, Chief Technology Officer at Transcard, with the quote above.

    5. Shape for scale, and beyond your industry

    For Transcard, Virtual CFO is only the beginning. The organization saw that the same approach could solve challenges across teams, industries and geographies. Their ambition now is to expand the model to help more businesses tackle their toughest obstacles.

    “In the digital world, everything is ones and zeros – you can do anything. Don’t box in good ideas because you think the tech won’t work. Start with the idea, then let the tech follow. The real stake in the ground should be: does it add value, and can you make it actionable?” enthuses David.

    The bigger picture

    Transcard proves that actionable AI is a design choice: focus on the most critical challenges, set the priorities, build for the final mile, and ship in trust‑building waves. Start now and make it cultural.


    Do this Monday

    1. Pick one finance fire drill (fraud hold, payment exception).
    2. Define the action you want the agent to take (final mile).
    3. Write your non‑negotiables (accuracy, security, speed).
    4. Pilot with a customer advisory group; ship in waves.

    Next steps

    It’s your time to become an AI innovator.

    “AI is one of those rare technology waves, like the internet, that will reshape industries for decades. To lead, you need to start now. Build the foundation. Make the shift cultural,” concludes Greg Bloh, CEO of Transcard.

    Ready to take your AI journey further? Explore the resources designed to help you lead.


    Microsoft in Business Team


  • Microsoft hopes Mico succeeds where Clippy failed as tech companies warily imbue AI with personality



    Clippy, the animated paper clip that annoyed Microsoft Office users nearly three decades ago, might have just been ahead of its time.

    Microsoft introduced a new artificial intelligence character called Mico (pronounced MEE’koh) on Thursday, a floating cartoon face shaped like a blob or flame that will embody the software giant’s Copilot virtual assistant and marks the latest attempt by tech companies to imbue their AI chatbots with more of a personality.

    Copilot’s cute new emoji-like exterior comes as AI developers face a crossroads in how they present their increasingly capable chatbots to consumers without causing harm or backlash. Some have opted for faceless symbols, others like Elon Musk’s xAI are selling flirtatious, human-like avatars and Microsoft is looking for a middle ground that’s friendly without being obsequious.

    “When you talk about something sad, you can see Mico’s face change. You can see it dance around and move as it gets excited with you,” said Jacob Andreou, corporate vice president of product and growth for Microsoft AI, in an interview with The Associated Press. “It’s in this effort of really landing this AI companion that you can really feel.”

    In the U.S. only so far, Copilot users on laptops and phone apps can speak to Mico, which changes colors, spins around and wears glasses when in “study” mode. It’s also easy to shut off, which is a big difference from Microsoft’s Clippit, better known as Clippy and infamous for its persistence in offering advice on word processing tools when it first appeared on desktop screens in 1997.

    “It was not well-attuned to user needs at the time,” said Bryan Reimer, a research scientist at the Massachusetts Institute of Technology. “Microsoft pushed it, we resisted it and they got rid of it. I think we’re much more ready for things like that today.”

    Reimer, co-author of a new book called “How to Make AI Useful,” said AI developers are balancing how much personality to give AI assistants based on who their expected users are.

    Tech-savvy adopters of advanced AI coding tools may want it to “act much more like a machine because at the back end they know it’s a machine,” Reimer said. “But individuals who are not as trustful in a machine are going to be best supported — not replaced — by technology that feels a little more like a human.”

    Microsoft, a provider of work productivity tools that is far less reliant on digital advertising revenue than its Big Tech competitors, also has less incentive to make its AI companion overly engaging in a way that’s been tied to social isolation, harmful misinformation and, in some cases, suicides.

    Andreou said Microsoft has watched as some AI developers veered away from “giving AI any sort of embodiment,” while others are moving in the opposite direction in enabling AI girlfriends.

    “Those two paths don’t really resonate with us that much,” he said.

    Andreou said the companion’s design is meant to be “genuinely useful” and not so validating that it would “tell us exactly what we want to hear, confirm biases we already have, or even suck you in from a time-spent perspective and just try to kind of monopolize and deepen the session and increase the time you’re spending with these systems.”

    “Being sycophantic — short-term, maybe — has a user respond more favorably,” Andreou said. “But long term, it’s actually not moving that person closer to their goals.”

    Microsoft’s product releases Thursday include a new option to invite Copilot into a group chat, an idea that resembles how AI has been integrated into social media platforms like Snapchat, where Andreou used to work, or Meta’s WhatsApp and Instagram. But Andreou said those interactions have often involved bringing in AI as a joke to “troll your friends,” in contrast to Microsoft’s designs for an “intensely collaborative” AI-assisted workplace.

    Microsoft’s audience includes kids, as part of its longtime competition with Google and other tech companies to supply its technology to classrooms. Microsoft also Thursday added a feature to turn Copilot into a “voice-enabled, Socratic tutor” that guides students through concepts they’re studying.

    A growing number of kids use AI chatbots for everything — homework help, personal advice, emotional support and everyday decision-making.

    The Federal Trade Commission launched an inquiry last month into several social media and AI companies — Microsoft wasn’t one of them — about the potential harms to children and teenagers who use their AI chatbots as companions.

    That’s after some chatbots have been shown to give kids dangerous advice about topics such as drugs, alcohol and eating disorders, or engaged in sexual conversations with them. Families of teen boys who died by suicide after lengthy chatbot interactions have filed wrongful death lawsuits against Character.AI and ChatGPT maker OpenAI.

    OpenAI CEO Sam Altman recently promised “a new version of ChatGPT” coming this fall that restores some of the personality lost when it introduced a new version in August. He said the company temporarily halted some behaviors because “we were being careful with mental health issues” that he suggested have now been fixed.

    “If you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it,” Altman said on X. (In the same post, he also said OpenAI will later enable ChatGPT to engage in “erotica for verified adults,” which got more attention.)
