ReportWire

Tag: ai images

  • Are Your Employees Using AI to Create Fake Expense Receipts?


    Countless business owners now use artificial intelligence tools to automate work tasks, boost productivity, and lower their costs. But some employees are also finding ways to use chatbots to enrich themselves on their employer’s dime, as more workers are turning in business expense reports padded with fake receipts created by online AI apps.

    Multiplying media reports and social-platform posts note the increased use of phony AI-generated restaurant, hotel, and transportation bills by less-than-upright employees, who ask their companies to reimburse those fictitious outlays as work expenses. This small-scale fraud often depends on easily accessible, and often free, online chatbots that no employer would ever want used for low-level grifting. But whether they’ve been tipped off to the practice by online messages or discovered it themselves through experimentation, a growing number of people have learned that platforms like ChatGPT can quickly serve up a bogus lunch chit to foist on an unsuspecting boss for repayment.

    It’s still not clear how many businesses are being swindled by this relatively recent scam, or how much it is costing employers. Remote payments news site Pymnts recently released a study finding that “68 percent of organizations encountered at least one fraud attempt” through their accounts payable services, including fake employee receipt submissions.

    This form of grifting isn’t new. For decades, some incorrigible employees have used applications like Photoshop to doctor receipts collected by other people to appear as their own business expenses, and other pre-AI software lets people create faux invoices from scratch.

    But the flow of authentic-looking, hard-to-detect AI fakes may soon increase. Word continues to spread rapidly about how convincing chatbot-created forgeries look, and how difficult they are for employers to spot before reimbursing funds employees never spent.

    “You can use [ChatGPT] 4o to generate fake receipts,” noted tech sector employee Deedy in a March post on X. “There are too many real world verification flows that rely on ‘real images’ as proof. That era is over.”

    Just how easy is it to create a sham proof of payment slip?

    One AI-novice reporter, who started in print media back when publications still used telexes as communications tech, managed to create a passable first-attempt fake receipt in under a minute. Gaining access to a more powerful version of the same free chatbot, and refining the input prompt to specify a restaurant location and the last digits of a real credit card to appear on the sham bill, would likely have produced an entirely convincing forgery. The initial results were already impressive.

    The ease and effectiveness of using AI to make bogus receipts is already causing some employers to revert to old-school accounting methods to confound digital expense fraud.

    “Lock it up, and get out,” said BB_Fin on a Reddit thread titled “ChatGPT now allows the creation of photorealistic fake receipts” earlier this year. “We’re going to a full paper based system again. The future is the past.”

    But there are more modern ways to battle the problem for employers willing to pay for them.

    Tech companies including Expensify, SAP Concur, and AppZen already have or are developing tools to spot AI-generated fake receipts. Those apps aim to rectify one of the biggest flaws that allows forgeries to sneak through: Automated AI platforms used to vet submitted employee expense accounts are often unable to identify the fraudulent bills created by the same or a similar app.

    In response, new products are siccing multiple AI agents on submitted digital receipts to catch telltale tip-offs in fakes. Those include the metadata fingerprints that bots are programmed to leave on images they generate, the mathematical errors chatbots can make on fake chits, and the occasional hallucinations that put incompatible items like dry cleaning or taxi charges on a restaurant bill.
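    Two of those tip-offs lend themselves to straightforward automation: checking that a receipt’s printed totals are internally consistent, and checking whether image metadata names a known AI generator. The sketch below is illustrative only; the function names, generator list, and tolerance are assumptions, not any vendor’s actual API.

    ```python
    # Known generator names that sometimes appear in an image's
    # "Software"/creator metadata fields (illustrative list).
    AI_GENERATOR_TAGS = {"openai", "dall-e", "stable diffusion", "midjourney"}

    def arithmetic_consistent(line_items, subtotal, tax, total, tolerance=0.01):
        """Flag receipts whose printed totals don't add up."""
        items_sum = round(sum(line_items), 2)
        return (abs(items_sum - subtotal) <= tolerance
                and abs(subtotal + tax - total) <= tolerance)

    def metadata_suspicious(software_tag):
        """Flag images whose software metadata names an AI tool."""
        tag = (software_tag or "").lower()
        return any(gen in tag for gen in AI_GENERATOR_TAGS)

    def screen_receipt(line_items, subtotal, tax, total, software_tag=""):
        """Run both checks and return a list of human-readable flags."""
        flags = []
        if not arithmetic_consistent(line_items, subtotal, tax, total):
            flags.append("totals do not add up")
        if metadata_suspicious(software_tag):
            flags.append("metadata names an AI generator")
        return flags

    # A fabricated receipt with a math slip and a telltale metadata tag:
    print(screen_receipt([12.99, 8.50], 21.49, 1.77, 24.00, "DALL-E 3"))
    # → ['totals do not add up', 'metadata names an AI generator']
    ```

    Real products layer many more models on top, but even these two cheap checks catch the arithmetic and metadata slips the article describes.
    
    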

    “[O]ur Mastermind AI models work together, creating a platform of checks and balances,” wrote AppZen co-founder and CTO Kunal Verma on LinkedIn last April. “Where one model might miss a forgery, another catches it. This layered defense system is crucial because there’s no single ‘silver bullet’ for detecting the latest AI-generated fakes.”

    Social-media users report that widely used AI assistants like Copilot are also starting to flag fake receipts they’ve been asked to analyze.

    “Copilot: The receipt appears to be fake,” noted imnotokayandthatso-k in the same Reddit thread on forged receipts. “The listed items and prices are unusual and do not match the typical offerings and prices at Texas Roadhouse…. These items are not found on the official Texas Roadhouse menu. The prices are also extremely high compared to the usual prices for dishes at Texas Roadhouse.”

    For that reason, people claiming to be practitioners of pre-AI receipt forgeries say they’ll stick with software and hard-copy printouts of fake receipts they create by themselves.

    “Receipt printer from AliExpress: $20,” redditor DutchTinCan said in the Reddit thread. “Photoshop license: $89. Unlimited expense receipts: Priceless.”

    Except that’s not so for their employer, who’s shelling out money to reimburse those phony expenses.


    Bruce Crumley

  • Meta changes its label from ‘Made with AI’ to ‘AI info’ to indicate use of AI in photos | TechCrunch



    After Meta started tagging photos with a “Made with AI” label in May, photographers complained that the social networking company had been applying labels to real photos where they had used some basic editing tools.

    Because of the user feedback and general confusion around what level of AI is used in a photo, the company is changing the tag to “AI Info” across all of Meta’s apps.

    Meta said that the earlier version of the tag didn’t make clear enough to users that a tagged image is not necessarily created with AI, but might only have used AI-powered tools in the editing process.

    “Like others across the industry, we’ve found that our labels based on these indicators weren’t always aligned with people’s expectations and didn’t always provide enough context. For example, some content that included minor modifications using AI, such as retouching tools, included industry standard indicators that were then labeled ‘Made with AI’,” the company said in an updated blog post.

    Image Credits: Meta

    The company is not changing the underlying technology for detecting use of AI in photos and labeling them. Meta still uses information from technical metadata standards such as C2PA and IPTC that include information about use of AI tools.

    That means that if photographers use tools like Adobe’s Generative AI Fill to remove objects, their photos might still be tagged with the new label. However, Meta hopes that the new label will help people understand that a tagged image is not always created entirely by AI.

    “‘AI Info’ can encompass content that was made and/or modified with AI so the hope is that this is more in line with people’s expectations, while we work with companies across the industry to improve the process,” Meta spokesperson Kate McLaughlin told TechCrunch over email.

    The new tag will still not solve the problem of completely AI-generated photos going undetected. And it won’t tell users how much AI-powered editing has been done on an image.

    Meta and other social networks will need to set guidelines that aren’t unfair to photographers who haven’t changed their editing workflows, but whose touch-up tools now include some generative AI element. For their part, companies like Adobe should warn photographers that using a certain tool might get their image tagged with a label on other services.


    Ivan Mehta