ReportWire

Tag: llm

  • GARY AI Receives ATO for IL5

    Generative AI Agent for DoD Customers Delivers 10-50x Productivity from Day One

    CORAS, the only Agentic AI system in the Department of Defense (DoD), today announced that its proprietary and embedded Large Language Model (LLM) GARY has received its Authority to Operate (ATO) at Impact Level 5 (IL5). GARY is a unique Generative AI agent that drives 10-50x productivity and provides departments with a powerful, defense-grade digital ally that can be used within DoD environments with confidence. GARY will be unveiled at the U.S. Air Force’s DAFITIC Education and Training Event on August 25, 2025, in Montgomery, Alabama.

    CORAS is constantly evolving to meet the needs of its DoD customers. GARY is a digital assistant within the CORAS platform that redefines efficiency and navigation support for defense professionals by delivering robust, specific, and goal-oriented capabilities. GARY is an AI orchestrator that aggregates customer data and activates an ecosystem of Agentic AI agents designed to perform real work, add structure and reasoning, and drive execution in real time.

    • GARY is secure, sovereign AI, built with Claude via AWS Bedrock.

    • GARY drives dynamic decisions with actionable clarity, AI-orchestrated options, and is context-aware.

    • GARY’s outputs provide full transparency, resource and logic tracing, and auditability.

    • GARY is fast to deploy. Easy to use. No code needed.

    • GARY is a digital partner that actively transforms vast streams of data into actionable intelligence, not a bot that merely answers questions.

    • GARY teaches its users how to prompt and query more effectively.

    “DoD departments can now use GARY to do in minutes what has historically taken weeks or months,” said Dan Naselius, President and CTO of CORAS. “We know that GARY is a game-changer for optimizing the DoD workforce. GARY can write a brief, analyze charts and graphs, generate complex models, create maps and reports, and articulate what-if scenarios. The complex value of GARY is the exponential speed, security, and accuracy of its outputs to deliver responsible, mission-ready decision superiority. GARY is the readiness solution that the DoD needs right now.”

    CORAS is actively expanding its joint IL5 deployments and is positioned to extend capabilities into additional mission networks and environments. The company’s goal is to accelerate the adoption of secure, agentic AI across defense, intelligence, and allied operations. CORAS GARY is available for a 30-day free trial to current customers and to DoD members with a .mil address. Go to GARY.AI to experience jaw-dropping capability today.

    About CORAS: CORAS is the AI-powered decision intelligence platform trusted by leaders across the Department of Defense. Built for speed, security, and real-time execution, CORAS unifies data integration, agentic automation, and live decision support across IL5, NIPR, SIPR, FedRAMP-High, and other trusted DoD environments. Its platform and flagship Generative AI assistant, GARY, deliver operational clarity from portfolio to program to execution level. CORAS is available through GSA, NASA SEWP, SBIR Phase III, Tradewinds AI Marketplace, and partners including Carahsoft and AWS. For more information, visit www.coras.ai.

    Contact Information

    Rebecca Churchill
    Churchill Communications & Marketing, LLC
    rc@churchillcommunicationsllc.com
    917-518-9789

    Source: CORAS


  • This Facial Recognition Experiment With Meta’s Smart Glasses Is a Terrifying Vision of the Future

    Two college students have used Meta’s smart glasses to build a tool that quickly identifies any stranger walking by and brings up that person’s sensitive information, including their home address and contact information, according to a demonstration video posted to Instagram. And while the creators say they have no plans to release the code for their project, the demo gives us a peek at humanity’s very likely future—a future that used to be confined to dystopian sci-fi movies.

    The two people behind the project, AnhPhu Nguyen and Caine Ardayfio, are computer science students at Harvard who often post their tech experiments on social media, including 3D-printed images and wearable flame-throwers. But it’s their latest experiment, first spotted by 404 Media, that’s probably going to make a lot of people feel uneasy.

    An Instagram video posted by Nguyen explains how the two men built a program that feeds the visual information from Meta Ray-Ban smart glasses into facial recognition tools like Pimeyes, which have essentially scraped the entire web to identify where that person’s face shows up online. From there, a large language model infers the likely name and other details about that person. That name is then fed to various websites that can reveal the person’s home address, phone number, occupation or other organizational affiliations, and even the names of relatives.

    “To use it, you just put the glasses on, then as you walk by people, the glasses will detect when somebody’s face is in frame. This photo is used to analyze them, and after a few seconds, their personal information pops up on your phone,” Nguyen explains in the Instagram video.

    Nguyen and Ardayfio call their project I-XRAY, and it’s pretty stunning how much information they’re able to pull up in a short amount of time. They’re quick to point out that many of these tools have only become widely available in the past few years. For example, Meta’s camera-equipped smart glasses that look like regular eyeglasses were only released last year. And the kind of LLM data extraction they’re achieving has only become possible in the past two years. Even the ability to look up partial Social Security numbers (thanks to all those data leaks you read about every day now) has only been available at the consumer level since 2023.

    As you can see in the video, they also approached strangers and acted like they knew those people from elsewhere after instantly looking up their information.

    “The system leverages the ability of LLMs to understand, process, and compile vast amounts of information from diverse sources–inferring relationships between online sources, such as linking a name from one article to another, and logically parsing a person’s identity and personal details through text,” the creators say in an explanation document posted to Google Drive. “This synergy between LLMs and reverse face search allows for fully automatic and comprehensive data extraction that was previously not possible with traditional methods alone.”

    The creators list the tools they used in their release, noting that anyone can request that those services remove their information. For reverse facial search engines, there’s Pimeyes and Facecheck ID. For search engines that include personal information there’s FastPeopleSearch, CheckThem, and Instant Checkmate. As for the social security number information, there’s no way to get that stuff removed, so the students recommend freezing your credit.

    The students didn’t immediately respond to questions from Gizmodo on Wednesday morning. Meta also didn’t respond to a request for comment. We’ll update this post if we hear back. But in the meantime, we should all probably get ready for this kind of tech to emerge more widely since this kind of technological mash-up feels inevitable at this point—especially if any of the new smart glasses that guys like Mark Zuckerberg love so much really become mainstream.

    It may take quite a while for the biggest tech companies to get behind it, but just as we saw OpenAI essentially fire the starting gun for consumer-facing generative AI, any small upstart could plausibly make this product happen and start the dominoes falling for other larger tech companies to get this future started. Let’s cross our fingers and hope for the best, given the privacy implications. It really feels like nobody will have any semblance of anonymity in public once this ball gets rolling.

    Matt Novak


  • WTF Fun Fact 13718 – Recreating the Holodeck

    Engineers from the University of Pennsylvania have created a tool inspired by Star Trek’s Holodeck. It uses advances in AI to transform how we interact with digital spaces.

    The Power of Language in Creating Virtual Worlds

    In Star Trek, the Holodeck was a revolutionary concept, a room that could simulate any environment based on verbal commands. Today, that concept has moved closer to reality. The UPenn team has developed a system where users describe the environment they need, and AI brings it to life. This system relies heavily on large language models (LLMs), like ChatGPT. These models understand and process human language to create detailed virtual scenes.

    For example, if a user requests a “1b1b apartment for a researcher with a cat,” the AI breaks this down into actionable items. It designs the space, selects appropriate objects from a digital library, and arranges them realistically within the environment. This method simplifies the creation of virtual spaces and opens up possibilities for training AI in scenarios that mimic real-world complexity.
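    To make the decomposition step concrete, here is a minimal sketch of how a request like the one above might be turned into a structured scene specification and then populated from an object library. The JSON schema, the asset library, and the stubbed-out LLM call are illustrative assumptions for this sketch, not the UPenn team’s actual code or data.

```python
import json

# Hypothetical asset library: object name -> footprint in meters (illustrative only).
ASSET_LIBRARY = {
    "bed": (2.0, 1.5),
    "desk": (1.2, 0.6),
    "bookshelf": (0.9, 0.3),
    "cat_bed": (0.5, 0.5),
    "sofa": (1.8, 0.9),
}

def request_scene_spec(prompt: str) -> str:
    """Stand-in for an LLM call (e.g., ChatGPT) that turns a request such as
    '1b1b apartment for a researcher with a cat' into structured JSON.
    A hard-coded example is returned here so the sketch runs offline."""
    return json.dumps({
        "rooms": [
            {"name": "bedroom", "size_m": [4, 3], "objects": ["bed", "desk", "cat_bed"]},
            {"name": "living_room", "size_m": [5, 4], "objects": ["sofa", "bookshelf"]},
        ]
    })

def build_scene(prompt: str) -> list:
    """Parse the spec and place known assets left to right inside each room."""
    spec = json.loads(request_scene_spec(prompt))
    placements = []
    for room in spec["rooms"]:
        x = 0.0
        for obj in room["objects"]:
            if obj not in ASSET_LIBRARY:
                continue  # skip anything the object library cannot supply
            width, depth = ASSET_LIBRARY[obj]
            placements.append({"room": room["name"], "object": obj,
                               "position": (x, 0.0), "size": (width, depth)})
            x += width + 0.3  # naive layout with a 30 cm gap between objects
    return placements

if __name__ == "__main__":
    for placement in build_scene("1b1b apartment for a researcher with a cat"):
        print(placement)
```

    The real system described by the researchers selects and arranges assets far more intelligently; the point of the sketch is only the pipeline shape: natural language in, structured scene out, objects drawn from a library.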

    The Holodeck-Inspired System

    Traditionally, virtual environments for AI training were crafted by artists, a time-consuming and limited process. Now, with the Holodeck-inspired system, millions of diverse and complex environments can be generated quickly and efficiently. This abundance of training data is crucial for developing “embodied AI”: robots that understand and navigate our world.

    Just think of the practical applications. For example, robots can be trained in these virtual worlds to perform tasks ranging from household chores to complex industrial jobs before they ever interact with the real world. This training ensures that AI behaves as expected in real-life situations, reducing errors and improving efficiency.

    A Leap Forward in AI Training and Functionality

    The University of Pennsylvania’s project goes beyond generating simple spaces. It tests these environments with real AI systems to refine their ability to interact with and navigate these spaces. For instance, an AI trained in a virtual music room was significantly better at locating a piano than one trained with traditional methods. This shows that AI can learn much more effectively in these dynamically generated environments.

    The project also highlights a shift in AI research focus to varied environments like stores, public spaces, and offices. By broadening the scope of training environments, AI can adapt to more complex and varied tasks.

    The connection between this groundbreaking AI technology and Star Trek’s Holodeck lies in the core concept of creating immersive, interactive 3D environments on demand. Just as the Holodeck allowed the crew of the U.S.S. Enterprise to step into any scenario crafted by their commands, this new system enables users to generate detailed virtual worlds through simple linguistic prompts.

    This technology mimics the Holodeck’s ability to create and manipulate spaces that are not only visually accurate but also interactable, providing a seamless blend of fiction and functionality that was once only imaginable in the realm of sci-fi.


    Source: “Star Trek’s Holodeck recreated using ChatGPT and video game assets” — ScienceDaily



  • Nvidia’s new tool lets you run GenAI models on a PC | TechCrunch


    Nvidia, ever keen to incentivize purchases of its latest GPUs, is releasing a tool that lets owners of GeForce RTX 30 Series and 40 Series cards run an AI-powered chatbot offline on a Windows PC.

    Called Chat with RTX, the tool allows users to customize a GenAI model along the lines of OpenAI’s ChatGPT by connecting it to documents, files and notes that it can then query.

    “Rather than searching through notes or saved content, users can simply type queries,” Nvidia writes in a blog post. “For example, one could ask, ‘What was the restaurant my partner recommended while in Las Vegas?’ and Chat with RTX will scan local files the user points it to and provide the answer with context.”

    Chat with RTX defaults to AI startup Mistral’s open source model but supports other text-based models including Meta’s Llama 2. Nvidia warns that downloading all the necessary files will eat up a fair amount of storage — 50GB to 100GB, depending on the model(s) selected.

    Currently, Chat with RTX works with text, PDF, .doc, .docx, and .xml formats. Pointing the app at a folder containing any supported files will load the files into the model’s fine-tuning data set. In addition, Chat with RTX can take the URL of a YouTube playlist and load transcriptions of the videos in the playlist, enabling the selected model to query their contents.
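    In other words, the workflow is the familiar local-retrieval pattern: index the files in the chosen folder, pull out the passages most relevant to a question, and hand them to the locally running model as context. The sketch below is a generic, standard-library-only illustration of that pattern, not Nvidia’s Chat with RTX code; the folder path, the chunking parameters, and the final model hand-off are assumptions made for the example.

```python
from pathlib import Path

def load_chunks(folder: str, chunk_words: int = 200) -> list:
    """Read every .txt file in the folder and split it into fixed-size word chunks."""
    chunks = []
    for path in Path(folder).glob("*.txt"):
        words = path.read_text(encoding="utf-8", errors="ignore").split()
        for i in range(0, len(words), chunk_words):
            chunks.append(" ".join(words[i:i + chunk_words]))
    return chunks

def top_chunks(question: str, chunks: list, k: int = 3) -> list:
    """Rank chunks by naive word overlap with the question and keep the best k."""
    q_words = set(question.lower().split())
    ranked = sorted(chunks, key=lambda c: len(q_words & set(c.lower().split())), reverse=True)
    return ranked[:k]

def build_prompt(question: str, folder: str) -> str:
    """Assemble the retrieved passages and the question into a single prompt."""
    context = "\n---\n".join(top_chunks(question, load_chunks(folder)))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    # Placeholder: hand `prompt` to whatever local model is loaded (Mistral, Llama 2, ...).
    return prompt

if __name__ == "__main__":
    print(build_prompt("What was the restaurant my partner recommended while in Las Vegas?", "./notes"))
```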

    Now, there are certain limitations to keep in mind, which Nvidia, to its credit, outlines in a how-to guide.

    [Image: Chat with RTX. Image Credits: Nvidia]

    Chat with RTX can’t remember context, meaning that the app won’t take into account any previous questions when answering follow-up questions. For example, if you ask “What’s a common bird in North America?” and follow that up with “What are its colors?,” Chat with RTX won’t know that you’re talking about birds.
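    A common client-side workaround for this kind of stateless interface is to resend earlier turns with each new question, so that “What are its colors?” arrives bundled with the original bird question. The minimal sketch below shows that generic pattern; it is not a Chat with RTX feature, and the `echo_model` callable is a stand-in for whatever local model is actually answering.

```python
class ChatSession:
    """Keeps prior turns and prepends them to each new question so a
    stateless model still sees the conversation so far."""

    def __init__(self):
        self.history = []  # list of (question, answer) pairs

    def build_prompt(self, question: str) -> str:
        lines = [f"User: {q}\nAssistant: {a}" for q, a in self.history]
        lines.append(f"User: {question}\nAssistant:")
        return "\n".join(lines)

    def ask(self, question: str, model) -> str:
        # `model` is any callable that maps a prompt string to a reply string.
        reply = model(self.build_prompt(question))
        self.history.append((question, reply))
        return reply

if __name__ == "__main__":
    # Stand-in model that just echoes the last line of the prompt it received.
    echo_model = lambda prompt: f"(model sees: {prompt.splitlines()[-1]})"
    chat = ChatSession()
    chat.ask("What's a common bird in North America?", echo_model)
    print(chat.ask("What are its colors?", echo_model))
```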

    Nvidia also acknowledges that the relevance of the app’s responses can be affected by a range of factors, some easier to control for than others — including the question phrasing, the performance of the selected model and the size of the fine-tuning data set. Asking for facts covered in a couple of documents is likely to yield better results than asking for a summary of a document or set of documents. And response quality will generally improve with larger data sets, as will pointing Chat with RTX at more content about a specific subject, Nvidia says.

    So Chat with RTX is more a toy than anything to be used in production. Still, there’s something to be said for apps that make it easier to run AI models locally — which is something of a growing trend.

    In a recent report, the World Economic Forum predicted a “dramatic” growth in affordable devices that can run GenAI models offline, including PCs, smartphones, internet of things devices and networking equipment. The reasons, the WEF said, are the clear benefits: not only are offline models inherently more private — the data they process never leaves the device they run on — but they’re lower latency and more cost effective than cloud-hosted models.

    Of course, democratizing tools to run and train models opens the door to malicious actors — a cursory Google search yields many listings for models fine-tuned on toxic content from unscrupulous corners of the web. But proponents of apps like Chat with RTX argue that the benefits outweigh the harms. We’ll have to wait and see.



    Kyle Wiggers


  • Meta turned a blind eye to kids on its platforms for years, unredacted lawsuit alleges | TechCrunch

    A newly unredacted version of the multi-state lawsuit against Meta alleges a troubling pattern of deception and minimization in how the company handles kids under 13 on its platforms. Internal documents appear to show that the company’s approach to this ostensibly forbidden demographic is far more laissez-faire than it has publicly claimed.

    The lawsuit, filed last month, alleges a wide range of damaging practices at the company relating to the health and well-being of younger people on its platforms. From body image to bullying, privacy invasion to engagement maximization, all the purported evils of social media are laid at Meta’s door — perhaps rightly, but it also gives the appearance of a lack of focus.

    In one respect at least, however, the documentation obtained by the Attorneys General of 42 states is quite specific, “and it is damning,” as AG Rob Bonta of California put it. That is in paragraphs 642 through 835, which mostly document violations of the Children’s Online Privacy Protection Act, or COPPA. This law created very specific restrictions around young folks online, limiting data collection and requiring things like parental consent for various actions, but a lot of tech companies seem to consider it more suggestion than requirement.

    You know it is bad news for the company when they request pages and pages of redactions:

    [Image: heavily redacted pages from the complaint. Image Credits: TechCrunch / 42 AGs]

    This recently happened with Amazon as well, and it turned out they were trying to hide the existence of a price-hiking algorithm that skimmed billions from consumers. But it’s much worse when you’re redacting COPPA complaints.

    “We’re very bullish and confident in our COPPA allegations. Meta is knowingly taking steps that harm children, and lying about it,” AG Bonta told TechCrunch in an interview. “In the unredacted complaint we see that Meta knows that its social media platforms are used by millions of kids under 13, and they unlawfully collect their personal info. It shows that common practice where Meta says one thing in its public facing comments to Congress and other regulators, while internally it says something else.”

    The lawsuit argues that “Meta does not obtain—or even attempt to obtain—verifiable parental consent before collecting the personal information of children on Instagram and Facebook… But Meta’s own records reveal that it has actual knowledge that Instagram and Facebook target and successfully enroll children as users.”

    Essentially, while the problem of identifying kids’ accounts created in violation of platform rules is certainly a difficult one, Meta allegedly opted to turn a blind eye for years rather than enact more stringent rules that would necessarily impact user numbers.

    Here are a few of the most striking parts of the suit. While some of these allegations relate to practices from years ago, bear in mind that Meta (then Facebook) has been publicly saying for a decade that it doesn’t allow kids on the platform and that it works diligently to detect and expel them.

    Meta has internally tracked and documented under-13s, or U13s, in its audience breakdowns for years, as charts in the filing show. In 2018, for instance, it noted that 20 percent of 12-year-olds on Instagram used it daily. And this was not in a presentation about how to remove them — it related to market penetration. The other chart shows Meta’s “knowledge that 20-60% of 11- to 13-year-old users in particular birth cohorts had actively used Instagram on at least a monthly basis.”

    The newly unredacted chart shows that Meta tracked under-13 users closely.

    It’s hard to square this with the public position that users this age are not welcome. And it isn’t because leadership wasn’t aware.

    That same year, 2018, CEO Mark Zuckerberg received a report that there were approximately 4 million people under 13 on Instagram in 2015, which amounted to about a third of all 10-12-year-olds in the U.S., they estimated. Those numbers are obviously dated, but even so they are surprising. Meta has never, to our knowledge, admitted to having such enormous numbers and proportions of under-13 users on its platforms.

    Not externally, at least. Internally, the numbers appear to be well documented. For instance, as the lawsuit alleges:

    Meta possesses data from 2020 indicating that, out of 3,989 children surveyed, 31% of child respondents aged 6-9 and 44% of child respondents aged 10 to 12-years-old had used Facebook.

    It’s difficult to extrapolate from the 2015 and 2020 numbers to today’s (which, as we have seen from the evidence presented here, will almost certainly not be the whole story), but Bonta noted that the large figures are presented for impact, not as legal justification.

    “The basic premise remains that their social media platforms are used by millions of children under 13. Whether it’s 30 percent, or 20 or 10 percent… any child, it’s illegal,” he said. “If they were doing it at any time, it violated the law at that time. And we are not confident that they have changed their ways.”

    An internal presentation called “2017 Teens Strategic Focus” appears to specifically target kids under 13, noting that children use tablets as early as 3 or 4, and “Social identity is an Unmet need Ages 5-11.” One stated goal, according to the lawsuit, was specifically to “grow [Monthly Active People], [Daily Active People] and time spent among U13 kids.”

    It’s important to note here that while Meta does not permit accounts to be run by people under 13, there are plenty of ways it can lawfully and safely engage with that demographic. Some kids just want to watch videos from Spongebob Official, and that’s fine. However, Meta must verify parental consent, and the ways it can collect and use their data are limited.

    But the redactions suggest these under-13 users are not of the lawfully and safely engaged type. Reports of underage accounts are said to be automatically ignored, and Meta “continues collecting the child’s personal information if there are no photos associated with the account.” Of 402,000 reports of accounts owned by users under 13 in 2021, fewer than 164,000 were disabled. And these actions reportedly don’t cross between platforms, meaning that disabling an Instagram account doesn’t flag associated or linked Facebook or other accounts.

    Zuckerberg testified to Congress in March of 2021 that “if we detect someone might be under the age of 13, even if they lied, we kick them off.” (And “they lie about it a TON,” one research director said in another quote.) But documents from the next month cited by the lawsuit indicate that “Age verification (for under 13) has a big backlog and demand is outpacing supply” due to a “lack of [staffing] capacity.” How big a backlog? At times, the lawsuit alleges, on the order of millions of accounts.

    A potential smoking gun is found in a series of anecdotes from Meta researchers delicately avoiding the possibility of inadvertently confirming an under-13 cohort in their work.

    One wrote in 2018: “We just want to make sure to be sensitive about a couple of Instagram-specific items. For example, will the survey go to under 13 year olds? Since everyone needs to be at least 13 years old before they create an account, we want to be careful about sharing findings that come back and point to under 13 year olds being bullied on the platform.”

    In 2021, another, studying “child-adult sexual-related content/behavior/interactions” (!) said she was “not includ[ing] younger kids (10-12 yos) in this research” even though there “are definitely kids this age on IG,” because she was “concerned about risks of disclosure since they aren’t supposed to be on IG at all.”

    Also in 2021, Meta instructed a third party research company conducting a survey of preteens to remove any information indicating a survey subject was on Instagram, so the “company won’t be made aware of under 13.”

    Later that year, external researchers provided Meta with information that “of children ages 9-12, 45% used Facebook and 40% used Instagram daily.”

    During an internal 2021 study on youth in social media, Meta’s researchers first asked parents whether their kids were on Meta platforms and removed them from the study if so. But one researcher asked, “What happens to kids who slip through the screener and then say they are on IG during the interviews?” Instagram Head of Public Policy Karina Newton responded, “we’re not collecting user names right?” In other words, what happens is nothing.

    As the lawsuit puts it:

    Even when Meta learns of specific children on Instagram through interviews with the children, Meta takes the position that it still lacks actual knowledge that it is collecting personal information from an under-13 user because it does not collect user names while conducting these interviews. In this way, Meta goes through great lengths to avoid meaningfully complying with COPPA, looking for loopholes to excuse its knowledge of users under the age of 13 and maintain their presence on the Platform.

    The other complaints in the lengthy lawsuit have softer edges, such as the argument that use of the platforms contributes to poor body image and that Meta has failed to take appropriate measures. That’s arguably not as actionable. But the COPPA stuff is far more cut and dried.

    “We have evidence that parents are sending notes to them about their kids being on their platform, and they’re not getting any action. I mean, what more should you need? It shouldn’t even have to get to that point,” Bonta said.

    “These social media platforms can do anything they want,” he continued. “They can be operated by a different algorithm, they can have plastic surgery filters or not have them, they can give you alerts in the middle of the night or during school, or not. They choose to do things that maximize the frequency of use of that platform by children, and the duration of that use. They could end all this today if they wanted, they could easily keep those under 13 from accessing their platform. But they’re not.”

    You can read the mostly unredacted complaint here.

    TechCrunch has contacted Meta for comment on the lawsuit and some of these specific allegations, and will update this post if we hear back.

    Devin Coldewey
