ReportWire

Tag: openai

  • OpenAI Researcher: Students Should Still Learn to Code | Entrepreneur


    An OpenAI staff member is clearing up the “misinformation” online and telling high school students that they should “absolutely learn to code.”

    On an episode of the OpenAI podcast last week, OpenAI researcher Szymon Sidor noted that high school students still gain benefits from learning programming, even though AI coding tools like ChatGPT and Cursor automate the process.

    Learning to code helps students develop problem-solving and critical-thinking skills, Sidor said. He noted that even if programming becomes obsolete in the future, it is still a viable way to cultivate the skill of breaking down problems and solving them.

    Related: Perplexity CEO Says AI Coding Tools Cut Work Time From ‘Four Days to Literally One Hour’

    “One skill that is at premium, and will continue being at premium, is to have a really structured intellect that can break complicated problems into pieces,” Sidor said on the podcast. “That might not be programming in the future, but programming is a fine way to acquire that skill. So are other kinds of domains where you need to think a lot.”

    Podcast host Andrew Mayne, who was previously OpenAI’s chief science communicator, agreed with Sidor. Mayne said that he learned to code “later in life” and found it a useful foundation for interacting with AI and engineering precise prompts.

    “Whenever I hear people say, ‘Don’t learn to code,’ it’s like, do I want an airplane pilot who doesn’t understand aerodynamics?” Mayne said on the podcast. “This doesn’t make much sense to me.”

    Though Mayne and Sidor may believe that learning to code is foundational and recommend it to high school students, another AI leader presents a contrasting viewpoint. Jensen Huang, the CEO of Nvidia, the most valuable company in the world, said in June that AI equalizes the technological playing field and allows anyone to write code simply by prompting an AI bot in natural language.

    Instead of learning Python or C++, users can just ask AI to write a program, Huang explained.

    Related: AI Will Create More Millionaires in the Next 5 Years Than the Internet Did in 2 Decades, According to Nvidia’s CEO

    Big Tech companies are increasingly turning to AI to generate new code, instead of having human engineers manually write it.

    In April, Google CEO Sundar Pichai said that staff members were tapping into AI to write “well over 30%” of new code at Google, higher than 25% recorded in October. In the same month, Microsoft CEO Satya Nadella stated that engineers are using AI to write up to 30% of code for company projects.


    Sherin Shibu


  • After Tension With Washington, Intel Is Suddenly a Hot Asset


    Earlier this month, President Donald Trump publicly called on Intel CEO Lip-Bu Tan to resign. Photo by Andrej Sokolow/picture alliance via Getty Images

    In its latest push into A.I. and semiconductors, SoftBank yesterday (Aug. 18) announced a $2 billion investment in Intel. The Masayoshi Son-led conglomerate purchased shares at a slight discount—$23 each—giving it about a 2 percent stake in the struggling U.S. chipmaker.

    “For more than 50 years, Intel has been a trusted leader in innovation,” said Son in a statement. “This strategic investment reflects our belief that advanced semiconductor manufacturing and supply will further expand in the U.S., with Intel playing a critical role.”

    SoftBank, long known for its bold bets, has been particularly aggressive in A.I. It has backed A.I. startups like Perplexity AI and OpenAI, leading a $40 billion funding round for the latter that valued the ChatGPT maker at $300 billion earlier this year. In January, SoftBank also joined OpenAI, Oracle, and others in launching Stargate, a $500 billion initiative aimed at boosting domestic A.I. development over the next four years.

    On the semiconductor front, SoftBank is the majority owner of chip designer Arm and last year acquired Graphcore to position it as an Nvidia rival. The company previously held around 5 percent of Nvidia but sold its stake in 2019, just before the A.I. boom sent the chipmaker’s value soaring. SoftBank has since rebuilt its Nvidia holdings to around $3 billion.

    While surging demand for A.I. chips has made Nvidia the world’s most valuable publicly listed company, Intel has struggled to capitalize on the boom. Once a leader in semiconductor manufacturing, the Santa Clara, Calif.-based company has fallen behind rivals in areas like GPUs. After SoftBank revealed its investment, the conglomerate’s own shares dropped more than 7 percent today, while Intel shares jumped 7 percent on the news.

    The U.S. eyes a stake in Intel

    Also bolstering Intel’s share price today are reports that the U.S. government is considering taking a 10 percent stake in the company. The government is weighing converting funds that Intel was supposed to receive under the Biden-era Chips and Science Act into an equity stake, U.S. Commerce Secretary Howard Lutnick told CNBC today.

    The move would add a new twist to the tumultuous relationship between Washington and the semiconductor industry. Earlier this month, President Donald Trump publicly called on Intel CEO Lip-Bu Tan to resign, citing alleged conflicts of interest—a demand he walked back after meeting Tan at the White House last week. In August, the administration also announced that Nvidia and AMD could resume exporting chips to China, but only if they pay the U.S. 15 percent of revenue from those sales.

    Tan, who took over as Intel’s chief executive in March, is focused on catching up with competitors by emphasizing engineering, cutting costs and laying off about 25,000 employees throughout 2025. A veteran of the semiconductor industry, Tan has close ties to Son, having previously served on SoftBank’s board until 2022.

    “We are pleased to deepen our relationship with SoftBank, a company that’s at the forefront of so many areas of emerging technology and innovation and shares our commitment to advancing U.S. technology and manufacturing leadership,” said Tan in a statement. “Masa and I have worked closely together for decades, and I appreciate the confidence he has placed in Intel with this investment.”



    Alexandra Tremayne-Pengelly


  • Sam Altman’s AI paradox: Warning of a bubble while raising trillions


    Welcome to Eye on AI! AI reporter Sharon Goldman here, filling in for Jeremy Kahn. In this edition… Sam Altman’s AI paradox…AI has quietly become a fixture of advertising…Silicon Valley’s AI deals are creating zombie startups…sources say Nvidia working on new AI chip for China that outperforms the H20.

    I was not invited to Sam Altman’s cozy dinner with reporters in San Francisco last week (whomp whomp), but maybe that’s for the best. I have trouble suppressing exasperated eye rolls when I hear peak Silicon Valley irony.

    I am not sure I could have controlled myself when the OpenAI CEO said that he believes AI could be in a “bubble,” with market conditions similar to the 1990s dotcom boom. Yes, he reportedly said, “investors as a whole are overexcited about AI.” 

    Yet, over the same meal, Altman also apparently said he expects OpenAI to spend trillions of dollars on its data center buildout in the “not very distant future,” adding that “you should expect a bunch of economists wringing their hands, saying, ‘This is so crazy, it’s so reckless,’ and we’ll just be like, ‘You know what? Let us do our thing.’”

    Ummm…what could be more frothy than pitching a multi-trillion-dollar expansion in an industry you’ve just called a bubble? Cue an eye roll reaching the top of my head. Sure, Altman may have been referring to smaller AI startups with sky-high valuations and little to no revenue, but still, the irony is rich. It’s particularly notable given the weak GPT-5 rollout earlier this month, which was supposed to mark a leap forward but instead left many disappointed with its routing system and lack of breakthrough progress.

    In addition, even as Altman speaks of bubbles, OpenAI itself is raising record sums. In early August, OpenAI secured a whopping $8.3 billion in new funding at a $300 billion valuation—part of its plan to raise $40 billion this year. That figure was five times oversubscribed. On top of that, employees are now poised to sell about $6 billion in shares to investors like SoftBank, Dragoneer, and Thrive, pushing the company’s valuation potentially up to $500 billion.

    OpenAI is hardly an outlier in its infrastructure binge. Tech giants are pouring unprecedented sums into AI buildouts in 2025: Microsoft alone plans to spend $80 billion on AI data centers this fiscal year, while Meta is projecting up to $72 billion in AI and infrastructure investments. And on the fundraising front, OpenAI has company too — rivals like Anthropic are chasing multibillion-dollar rounds of their own. 

    Wall Street’s biggest bulls, like Wedbush’s Dan Ives, seem unconcerned. Ives said Monday on CNBC’s “Closing Bell” that demand for AI infrastructure has grown 30% to 40% in recent months, calling the capex surge a validation moment for the sector. While he acknowledged “some froth” in parts of the market, he said the AI revolution in autonomous systems is only starting to play out and that we are in the “second inning of a nine-inning game.”

    And while a bubble implies an eventual bursting, and all the damage that results, the underlying phenomenon causing a bubble often has real value. The advent of the web in the ’90s was revolutionary; the bubble was a reflection of the massive opportunities opening up.

    Still, I’d be curious if anyone pressed Altman on the AI paradox—warning of a bubble while simultaneously bragging about OpenAI’s massive fundraising and spending. Perhaps over a glass of bubbly and a sugary sweet dessert? I’d also love to know if he fielded tougher questions on the other big issues looming over the company: its shift to a public benefit corporation (and what that means for the nonprofit), the current state of its Microsoft partnership, and whether its mission of “AGI to benefit all of humanity” still holds now that Altman himself has said AGI “is not a super-useful term.”

    In any case, I’m game for a follow-up chat with Altman & Co (call me!). I’ll bring the bubbly, pop the questions, and do my best to keep the eye rolls at bay.

    Also: In just a few weeks, I will be headed to Park City, Utah, to participate in our annual Brainstorm Tech conference at the Montage Deer Valley! Space is limited, so if you’re interested in joining me, register here. I highly recommend: There’s a fantastic lineup of speakers, including Ashley Kramer, chief revenue officer of OpenAI; John Furner, president and CEO of Walmart U.S.; Tony Xu, founder and CEO of DoorDash; and many, many more!

    With that, here’s more AI news.

    Sharon Goldman
    sharon.goldman@fortune.com
    @sharongoldman

    FORTUNE ON AI

    Wall Street isn’t worried about an AI bubble. Sam Altman is – by Beatrice Nolan

    MIT report: 95% of generative AI pilots at companies are failing – by Sheryl Estrada

    Silicon Valley talent keeps getting recycled, so this CEO uses a ‘moneyball’ approach for uncovering hidden AI geniuses in the new era – by Sydney Lake

    Waymo experimenting with generative AI, but exec says LiDAR and radar sensors important to self-driving safety ‘under all conditions’ – by Jessica Matthews

    AI IN THE NEWS

    More shakeups for Meta AI. The New York Times reported today that Meta is expected to announce that it will split its A.I. division — which is known as Meta Superintelligence Labs — into four groups. One will focus on AI research; one on “superintelligence”; another on products; and one on infrastructure such as data centers. According to the article’s anonymous sources, the reorganization “is likely to be the final one for some time,” with moves “aimed at better organizing Meta so it can get to its goal of superintelligence and develop AI products more quickly to compete with others.” The news comes less than two months after CEO Mark Zuckerberg overhauled Meta’s entire AI organization, including bringing on Scale AI CEO Alexandr Wang as chief AI officer.

    Madison Avenue is starting to love AI. According to the New York Times, artificial intelligence has quietly become a fixture of advertising. What felt novel when Coca-Cola released an AI-generated holiday ad last year is now mainstream: nearly 90% of big-budget marketers are already using—or planning to use—generative AI in video ads. From hyper-realistic backdrops to synthetic voice-overs, the technology is slashing costs and production times, opening TV spots to smaller businesses for the first time. Companies like Shuttlerock and ITV are helping brands replace weeks of work with hours, while tech giants like Meta and TikTok push their own AI ad tools. The shift raises ethical questions about displacing creatives and fooling viewers, but industry leaders say the genie is out of the bottle: AI isn’t just streamlining ad production—it’s reshaping the entire commercial playbook.

    Silicon Valley’s AI deals are creating zombie startups: ‘You hollowed out the organization.’ According to CNBC, Silicon Valley’s AI startup scene is being hollowed out as Big Tech sidesteps antitrust rules with a new playbook: licensing deals and talent raids that gut promising young companies. Windsurf, once in talks to be acquired by OpenAI, collapsed into turmoil after its founders bolted to Google in a $2.4 billion licensing pact; interim CEO Jeff Wang described tearful all-hands meetings as employees realized they’d been left with “nothing.” Similar moves have seen Meta sink $14.3 billion into Scale AI, Microsoft scoop up Inflection’s founders, and Amazon strip talent from Adept and Covariant—leaving behind so-called “zombie companies” with little future. While founders and top researchers cash out, investors and rank-and-file staff are often left stranded, sparking growing concern that these quasi-acquisitions not only skirt regulators but also threaten to choke off AI innovation at its source.

    Nvidia working on new AI chip for China that outperforms the H20, sources say. According to Reuters, Nvidia is developing a new China-specific AI chip, codenamed B30A, based on its cutting-edge Blackwell architecture. The chip, which could be delivered to Chinese clients for testing as soon as next month, would be more powerful than the current H20 but still fall below U.S. export thresholds—using a single-die design with about half the raw computing power of Nvidia’s flagship B300. The move comes after President Trump signaled possible approval for scaled-down chip sales to China, though regulatory approval is uncertain amid bipartisan concerns in Washington over giving Beijing access to advanced AI hardware. Nvidia argues that retaining Chinese buyers is crucial to prevent defections to domestic rivals like Huawei, even as Chinese regulators cast suspicion on the company’s products.

    EYE ON AI RESEARCH

    Study finds AI-led interviews improved outcomes. A new study looked at what happens when job interviews are run by AI voice agents instead of human recruiters. In a large experiment with 70,000 applicants, people were randomly assigned to be interviewed by a person, by an AI, or given the choice. Surprisingly, AI-led interviews actually improved outcomes: applicants interviewed by AI were 12% more likely to get job offers, 18% more likely to start jobs, and 17% more likely to still be employed after 30 days. Most applicants didn’t mind the change—78% even chose the AI when given the option, especially those with lower test scores. The AI also pulled out more useful information from candidates, leading recruiters to rate those interviews higher. Overall, the study shows that AI interviewers can perform just as well as, or even better than, human recruiters—without hurting applicant satisfaction.

    AI CALENDAR

    Sept. 8-10: Fortune Brainstorm Tech, Park City, Utah. Apply to attend here.

    Oct. 6-10: World AI Week, Amsterdam

    Oct. 21-22: TedAI San Francisco. Apply to attend here.

    Dec. 2-7: NeurIPS, San Diego

    Dec. 8-9: Fortune Brainstorm AI San Francisco. Apply to attend here.

    BRAIN FOOD

    Do AI chatbots need to be protected from harm? 

    AI lab Anthropic has introduced a new safety measure in its latest Claude models, which empowers the AI to terminate conversations in extreme cases of harmful or abusive interaction. The feature activates only after repeated redirections fail—typically for content requests involving sexual exploitation of minors or facilitation of large-scale violence. The company is notably framing this as a safeguard not principally for users, but for the model’s own “AI welfare,” reflecting an exploratory stance on the machine’s potential moral status.

    Unsurprisingly, the idea of granting AI moral status is contentious. Jonathan Birch, a philosophy professor at the London School of Economics, told The Guardian he welcomed Anthropic’s move for sparking a public debate about AI sentience—a topic he said many in the industry would rather suppress. At the same time, he warned that the decision risks misleading users into believing the chatbot is more real than it is.

    Others argue that focusing on AI welfare distracts from urgent human concerns. For example, while Claude is designed to end only the most extreme abusive conversations, it will not intervene in cases of imminent self-harm—even though a New York Times opinion piece yesterday urged such safeguards, written by a mother who discovered her daughter’s ChatGPT conversations only after her daughter’s suicide.


  • OpenAI launches a sub $5 ChatGPT plan in India | TechCrunch

    [ad_1]

    OpenAI today launched a new, cheaper ChatGPT paid subscription plan in India called ChatGPT Go, priced at ₹399 per month (about $4.60), which is far more affordable than the ₹1,999 (about $23) per month Plus plan.

    The company had turned on local currency pricing for all its plans a few days ago, and with this launch, it will also allow users to pay through UPI (Unified Payment Interface), India’s payment framework.

    Nick Turley, VP at OpenAI and head of ChatGPT, said that this plan will increase message limits, image generation, and file uploads tenfold over the free tier. The ChatGPT Go plan will also enable better memory retention for more personalized responses, Turley said.

    “Making ChatGPT more affordable has been a key ask from users! We’re rolling out Go in India first and will learn from feedback before expanding to other countries,” Turley said.

    From a currency-conversion standpoint, the Plus plan cost more than $20 for Indian users when offered in local currency. The new Go plan offers a more affordable alternative for people who are looking to use ChatGPT primarily for chat, image generation, and file processing.

    Tibor Blaho, a software engineer with a reputation for accurately leaking upcoming AI products, had previously teased this plan and its details.

    While OpenAI is geo-restricting this plan to India, the company said on its support page that it is working to expand the plan to more regions.


    Last month, Turley noted that ChatGPT now has more than 700 million weekly users worldwide — up from 500 million in March. OpenAI launched its updated image generator feature for ChatGPT in March, and since then, it has seen an uptick in usage in India. OpenAI CEO Sam Altman said in a recent podcast that India is the company’s second biggest market. With the Go plan, it wants to cash in on that.

    According to app analytics firm AppFigures, India has been the leading country in terms of ChatGPT app downloads across platforms, with over 29 million downloads coming from the country in the last 90 days. However, the app only made $3.6 million in this period from users in the country.

    This move will likely nudge more consumers to subscribe, given the plan’s pricing. Other AI companies have also made moves to attract users from the country’s internet user base of over 850 million. Last month, Perplexity partnered with network provider Airtel to offer free Perplexity Pro subscriptions. Google also dished up a free AI Pro plan for India-based students for one year.

    While OpenAI is not giving out any freebies, local and affordable pricing will likely result in a better subscription conversion rate for ChatGPT in India.


    Ivan Mehta


  • There’s one bright spot for San Francisco’s office space market


    In recent years, San Francisco’s image as a welcoming place for businesses has taken a hit.

    Major tech companies such as Dropbox and Salesforce reduced footprints in the city by subleasing office space, while retailers including Nordstrom and Anthropologie pulled out of downtown. Social media firm X, formerly Twitter, vacated its Mid-Market headquarters for Texas, after owner Elon Musk complained about “dodging gangs of violent drug addicts just to get in and out of the building.”

    While the city remains on the defensive, one bright spot has been a boom in artificial intelligence startups.

    San Francisco’s 35.4% vacancy rate in the first quarter — among the highest in the nation — is expected to drop one to three percentage points in the third quarter thanks to AI companies expanding or opening new offices in the city, according to real estate brokerage firm JLL. The last time San Francisco’s vacancy rate dropped was in the fourth quarter, when it declined 0.2% — the first such decline since the COVID-19 pandemic, according to JLL.

    “People wanted to count us out, and I think that was a bad bet,” said Mayor Daniel Lurie. “We’re seeing all of this because the ecosystem is better here in San Francisco than anywhere else in the world, and it’s really an exciting time.”

    Five years ago, AI leases in San Francisco’s commercial real estate market were relatively sparse, with just two leases in 2020, according to JLL. But that’s since soared to 167 leases in the first quarter of 2025. The office footprint for AI companies has also surged, making up 4.8 million square feet in 2024, up from 2.6 million in 2022, JLL said.

    “You need the talent base, you need the entrepreneur ecosystem, and you need the VC ecosystem,” said Alexander Quinn, senior director of economic research for JLL’s Northwest region. “So all those three things exist within the greater Bay Area, and that enables us to be the clear leader.”

    AI firms are attracted to San Francisco because of the concentration of talent in the city, analysts said. The city is home to AI companies including ChatGPT maker OpenAI and Anthropic, known for the chatbot Claude, which in turn attract businesses that want to collaborate. The Bay Area is also home to universities that attract entrepreneurs and researchers, including UC Berkeley, UC San Francisco and Stanford University.

    Venture capital companies are pouring money into AI, fueling office and staff growth. OpenAI landed the world’s largest venture capital deal last quarter, raising $40 billion, according to research firm CB Insights.

    OpenAI leases about 1 million square feet of space across five different locations in the city and employs roughly 2,000 people in San Francisco. The company earlier this year opened its new headquarters in Mission Bay, leasing the space from Uber.

    OpenAI began as a nonprofit research lab in 2015, and the people involved found their way to San Francisco for the same reason earlier generations of technologists and frontier-pushing founders in the United States were drawn to the city, said Chris Lehane, OpenAI’s vice president of global affairs, in an interview.

    “It is a place where, when you put out an idea, no matter how crazy it may seem at the time, or how unorthodox it may seem … San Francisco is the city where people don’t say, ‘That’s crazy,’” Lehane said. “They say, ‘That’s a really interesting idea. Let’s see if we can do it.’”

    The interior of OpenAI’s new San Francisco headquarters in the Mission Bay neighborhood. (OpenAI)

    Databricks, valued at $62 billion, is also expanding in San Francisco. The company announced in March that it will move to a larger space in the Financial District next year, boosting its office footprint to 150,000 square feet and more than doubling its San Francisco staff in the next two years. It pledged to hold its annual Data + AI Summit in the city for five more years.

    The company holds 57,934 square feet at its current San Francisco office near the Embarcadero, according to CoStar, which tracks real estate trends.

    “San Francisco is a real talent magnet for AI talent,” said Databricks’ co-founder and vice president of engineering Patrick Wendell. “It’s a beautiful city for people to live and work in and so we really are just following where the employees are.”

    Several years ago, Wendell said his company was considering whether to expand in San Francisco. At the time, it was unclear whether people would return to offices after the pandemic, and some businesses raised concerns about safety and cleanliness of San Francisco’s streets. Wendell said his company decided to invest more in the city after getting reassurances from city leaders.

    “People are seeing an administration that is focused on public safety, clean streets and creating the conditions that also says that we’re open for business,” said Lurie, who defeated incumbent mayor London Breed last November by campaigning on public safety. “We’ve said from day one, we have to create the conditions for our arts and culture, for our small businesses and for our innovators and our entrepreneurs to thrive here.”

    Laurel Arvanitidis, director of business development for San Francisco’s Office of Economic and Workforce Development, said that the city’s policy and tax reforms have helped attract and retain businesses in recent years, including an office tax credit that gives up to a $1-million credit for businesses that are new or relocating to San Francisco.

    On Thursday, Lurie announced on social media that cryptocurrency exchange Coinbase is opening an office in San Francisco after leaving the city four years ago.

    “We are excited to reopen an office in SF,” Coinbase Chief Executive Brian Armstrong wrote in response to the mayor’s social media post. “Still lots of work to do to improve the city (it was so badly run for many years) but your excellent work has not gone unnoticed, and we greatly appreciate it.”

    Santa Clara-based Nvidia is also looking for San Francisco office space, according to a person familiar with the matter who declined to be named. The news was first reported by the San Francisco Chronicle. Nvidia, which also has California offices in San Dimas and Sunnyvale, declined to comment.

    “It’s because of AI that San Francisco is back,” Nvidia Chief Executive Jensen Huang said last month on the Hill & Valley Forum podcast. “Just about everybody evacuated San Francisco. Now it’s thriving again.”

    But San Francisco still has challenges ahead, as companies continue to push workers to return to the office. While the street environment has improved, it will be critical for the city to keep up the progress.

    Lurie said his administration inherited the largest budget deficit in the city’s history and they have to get that under control. His administration’s task is to make sure streets and public spaces are clean, safe and inviting, he said.

    “We have work to do, there’s no question, but we are a city on the rise, that’s for sure,” Lurie said.

    Times staff writer Roger Vincent contributed to this report.


    Wendy Lee


  • iAsk Surpasses Half a Billion Searches


    iAsk.Ai is Becoming One of the Fastest-Growing AI Answer Engines

    iAsk’s groundbreaking answer engine, iAsk Ai, has reached a significant milestone: over half a billion searches processed. The platform currently processes over 1.5 million searches daily, demonstrating explosive growth as users turn to novel AI answer engines.

    Since its launch, iAsk has been transforming how people access information by providing instant, factual, and highly relevant direct answers without the clutter of traditional search engines. Unlike conventional search engines that rely on keyword-based ranking and paid advertisements, iAsk leverages transformer neural networks to deliver precise, well-sourced responses that align with user intent.

    As users become increasingly frustrated with outdated or irrelevant search results, iAsk has emerged as the preferred choice for students, young professionals, and researchers who require reliable answers. The platform eliminates the need to sift through pages of search results by providing direct answers using the most trustworthy information.

    “Surpassing half a billion searches is a testament to how people are shifting away from traditional search engines and embracing AI-powered solutions,” said iAsk’s CEO and co-founder Dominik Mazur. “Users want truthful and unbiased answers without the frustration of multiple search attempts or sifting through various websites.”

    For example, a search for “best productivity hacks for students” doesn’t just return a list of web pages and sponsored links. Instead, iAsk analyzes the most authoritative sources, real-world data, the live internet, and current trends to generate well-researched, fact-based answers with related images, videos, and searches.

    With rapid user adoption and growing momentum, iAsk is on track to reach a billion searches in 2025. The company’s mission is to provide truthful answers to any questions users ask, saving thousands of hours of research.

    “We are committed to making information easily understandable for everyone,” Mazur added. “The growth we’ve seen has been organic, driven primarily by word of mouth, and that speaks volumes about the demand for a smart AI answer engine that increases productivity.”

    As AI technology advances, iAsk is focused on improving answer relevance and the user experience, and on expanding its suite of AI productivity tools. With its growing user base and ongoing AI enhancements, the platform is set to play a leading role in the future of AI search.

    For more information, visit iAsk.Ai.

    Contact Information

    Rahul Srivathsa
    Marketing Manager
    rahul@iask.ai
    888-765-4564

    Phillip DeRenzo
    Head of Marketing at iAsk
    phillip@iask.ai
    888-765-4564

    Source: AI SEARCH INC


  • Andreessen Horowitz Founders Notice A.I. Models Are Hitting a Ceiling

    The investment firm was founded by Ben Horowitz and Marc Andreessen in 2009. Photos by Phillip Faraone/Getty Images for WIRED and Paul Chinn/The San Francisco Chronicle via Getty Images

    Despite continuing to bet big on A.I. startups and chip programs, the founders of the venture capital firm Andreessen Horowitz say they’ve noticed a drop-off in A.I. model capability improvements in recent years. Two years ago, OpenAI’s GPT-3.5 model was “way ahead of everybody else’s,” said Marc Andreessen, who co-founded Andreessen Horowitz alongside Ben Horowitz in 2009, on a podcast released yesterday (Nov. 5). “Sitting here today, there’s six that are on par with that. They’re sort of hitting the same ceiling on capabilities,” he added.

    That’s not to say the investment firm doesn’t have faith in the new technology. One of the most aggressive investors in the A.I. space, Andreessen Horowitz earlier this year earmarked $2.25 billion in funding for A.I.-focused applications and infrastructure and has led investments in notable companies including Mistral AI, a French startup founded by former DeepMind and Meta (META) researchers, and Air Space Intelligence, an aerospace company using A.I. to enhance air travel.

    Despite their embrace of the new technology, Andreessen and Horowitz concede there are growth limitations. In the case of OpenAI’s models, the difference in capability growth between its GPT-2, GPT-3 and GPT-3.5 models, compared to the difference between GPT-3.5 and GPT-4, shows that “we’ve really slowed down in terms of the amount of improvement,” said Horowitz.

    One of the primary challenges for A.I. developers has been a global shortage of graphics processing units (GPUs), the chips that power A.I. models. OpenAI CEO Sam Altman last week cited the need to allocate compute as forcing the company to “face a lot of limitations and hard decisions” about which projects it focuses on. Nvidia, the leading GPU maker, has previously described the shortage as making clients “tense” and “emotional.”

    In response to this demand, Andreessen Horowitz recently established a chip-lending program that provides GPUs to its portfolio companies in exchange for equity. The firm reportedly has been building a stockpiled cluster of 20,000 GPUs, including Nvidia’s. However, chips aren’t the only aspect of compute that is of concern, according to Horowitz, who pointed to the need for more power and cooling across the data centers housing GPUs. “Once they get chips we’re not going to have enough power, and once we have the power we’re not going to have enough cooling,” he said on yesterday’s podcast.

    But compute needs might not actually be the largest barrier when it comes to improving A.I. model capabilities, according to the venture capital firm. It’s the availability of training data needed to teach A.I. models how to behave that is increasingly becoming a problem. “The big models are trained by scraping the internet and pulling in all human-generated training data, all-human generated text and increasingly video and audio and everything else, and there’s just literally only so much of that,” said Andreessen.

    Between April 2023 and April 2024, 5 percent of all data and 25 percent of data from the highest-quality sources was restricted by websites cracking down on the use of their text, images and videos in training A.I., according to a recent study from the Data Provenance Initiative.

    The issue has become so large that major A.I. labs are “hiring thousands of programmers and doctors and lawyers to actually handwrite answers to questions for the purpose of being able to train their A.I.’s—it’s at that level of constraint,” added Andreessen. OpenAI, for example, has a “Human Data Team” that works with A.I. trainers on gathering specialized data to train and evaluate models. And numerous A.I. companies have begun working with startups like Scale AI and Invisible Tech that hire human experts with specialized knowledge across medicine, law and other areas to help fine-tune A.I. model answers.

    Such practices fly in the face of fears relating to A.I.-driven unemployment, according to Andreessen, who noted that the dwindling supply of data has led to an unexpected A.I. hiring boom to help train models. “There’s an irony to this.”

    Alexandra Tremayne-Pengelly

  • Sam Altman’s Eye-Scanning Orb Has a New Look—and Will Come Right to Your Door

    While the biometric-scanning Orb and the World network have their roots in crypto tokens, “crypto” wasn’t an oft-mentioned word during the event. Instead, Altman and Blania emphasized World’s blockchain service, digital asset management, and virtual communication tools.

    Blania claimed during the press briefing that, in the future, World hopes to build the “largest finance network” on the planet.

    In a separate interview with WIRED, Blania said that during regular Sunday meetings at Altman’s house, the pair were inspired by the rise of PayPal. Similar to the way that Peter Thiel, Max Levchin, and others once pioneered digital payments and fundamentally changed online commerce—becoming billionaires themselves in the process—the World team saw themselves building out a similar network for tokens on a distributed network.

    The World app, for now, is free for everyone to use. It’s free to scan your eyeballs, too. Tools for Humanity itself is venture-backed, and the foundation, in its land grab for the modern identity verification market and your personal biometric data, is focused on scale, scale, scale. Eventually, it may make money through processing fees, Blania said.

    Most of Tools for Humanity’s expansion plans for now are in locations outside of the US, due to murky regulations around crypto stateside, the organization’s spokesperson told me.

    If you use the Orb and compatible app in the US, it will scan and store your iris but won’t generate a crypto token for you.

    Two and a half years ago, the Worldcoin project came under scrutiny for allegedly deceptive and exploitative practices in recruiting individuals to scan their irises. At the time, Blania attributed this haphazard behavior to the organization still being in its “startup” phase. In an interview with WIRED, Blania said the company is doing “like, a thousand things” to ensure a more rigorous consent process. This includes staffing an “operational team” in every market where World will be. He said there will be “explanations” in the World app for how the product works.

    “And again, there is no data stored in any central place or anything,” Blania said.

    In 2023, the service was also being investigated by governments in Germany, Brazil, India, South Korea, and Kenya over concerns about how it was storing and using biometric data. Kenya suspended Worldcoin enrollment entirely. South Korea fined the company. Worldcoin suspended its own service in India, Brazil, and France.

    Blania said he believes World will relaunch in Kenya “sometime soon.”

    When asked in the press briefing about the emphasis on Latin America as a market for expansion, such as through the partnership with Rappi for orbs-on-delivery, Blania disputed the idea that World was prioritizing Latin America over other locations.

    “It’s just that we have limited resources, and there’s a natural sequencing happening,” Blania said. “We are similarly focused on Asia and other places. Argentina has been a fast-growing market for us, for example, and we’re excited about that.”

    “But the project is literally called World,” he added.

    After the keynote, Altman ran into the press room to wave and apologize for not being able to stay, then slipped away like a head of state.

    Lauren Goode

  • Apple Engineers Show How Flimsy AI ‘Reasoning’ Can Be

    For a while now, companies like OpenAI and Google have been touting advanced “reasoning” capabilities as the next big step in their latest artificial intelligence models. Now, though, a new study from six Apple engineers shows that the mathematical “reasoning” displayed by advanced large language models can be extremely brittle and unreliable in the face of seemingly trivial changes to common benchmark problems.

    The fragility highlighted in these new results helps support previous research suggesting that LLMs’ use of probabilistic pattern matching is missing the formal understanding of underlying concepts needed for truly reliable mathematical reasoning capabilities. “Current LLMs are not capable of genuine logical reasoning,” the researchers hypothesize based on these results. “Instead, they attempt to replicate the reasoning steps observed in their training data.”

    Mix It Up

    In “GSM-Symbolic: Understanding the Limitations of Mathematical Reasoning in Large Language Models”—currently available as a preprint paper—the six Apple researchers start with GSM8K’s standardized set of more than 8,000 grade-school level mathematical word problems, which is often used as a benchmark for modern LLMs’ complex reasoning capabilities. They then take the novel approach of modifying a portion of that testing set to dynamically replace certain names and numbers with new values—so a question about Sophie getting 31 building blocks for her nephew in GSM8K could become a question about Bill getting 19 building blocks for his brother in the new GSM-Symbolic evaluation.

    This approach helps avoid any potential “data contamination” that can result from the static GSM8K questions being fed directly into an AI model’s training data. At the same time, these incidental changes don’t alter the actual difficulty of the inherent mathematical reasoning at all, meaning models should theoretically perform just as well when tested on GSM-Symbolic as GSM8K.
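
    The name-and-number substitution the researchers describe can be sketched with a simple template. This is a hypothetical illustration, not the GSM-Symbolic code itself; the problem template, name pool, and value ranges below are invented for the example.

```python
import random

# Hypothetical sketch of GSM-Symbolic-style templating: a static word
# problem becomes a template, and fresh names and numbers are sampled so
# the surface form changes while the reasoning steps stay identical.
TEMPLATE = ("{name} buys {a} boxes of pencils. Each box holds {b} pencils. "
            "How many pencils does {name} have?")
NAMES = ["Sophie", "Bill", "Ava", "Omar"]

def make_variant(rng: random.Random) -> tuple[str, int]:
    a, b = rng.randint(2, 9), rng.randint(2, 9)
    question = TEMPLATE.format(name=rng.choice(NAMES), a=a, b=b)
    return question, a * b  # the ground-truth answer tracks the new numbers

question, answer = make_variant(random.Random(0))
```

    Running many such sampled variants against a model and comparing accuracy across draws is what exposes the run-to-run variance the paper reports.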

    Instead, when the researchers tested more than 20 state-of-the-art LLMs on GSM-Symbolic, they found average accuracy reduced across the board compared to GSM8K, with performance drops between 0.3 percent and 9.2 percent, depending on the model. The results also showed high variance across 50 separate runs of GSM-Symbolic with different names and values. Gaps of up to 15 percent accuracy between the best and worst runs were common within a single model and, for some reason, changing the numbers tended to result in worse accuracy than changing the names.

    This kind of variance—both within different GSM-Symbolic runs and compared to GSM8K results—is more than a little surprising since, as the researchers point out, “the overall reasoning steps needed to solve a question remain the same.” The fact that such small changes lead to such variable results suggests to the researchers that these models are not doing any “formal” reasoning but are instead “attempt[ing] to perform a kind of in-distribution pattern-matching, aligning given questions and solution steps with similar ones seen in the training data.”

    Don’t Get Distracted

    Still, the overall variance shown for the GSM-Symbolic tests was often relatively small in the grand scheme of things. OpenAI’s GPT-4o, for instance, dropped from 95.2 percent accuracy on GSM8K to a still-impressive 94.9 percent on GSM-Symbolic. That’s a pretty high success rate using either benchmark, regardless of whether the model itself is using “formal” reasoning behind the scenes (though total accuracy for many models dropped precipitously when the researchers added just one or two additional logical steps to the problems).

    The tested LLMs fared much worse, though, when the Apple researchers modified the GSM-Symbolic benchmark by adding “seemingly relevant but ultimately inconsequential statements” to the questions. For this “GSM-NoOp” benchmark set (short for “no operation”), a question about how many kiwis someone picks across multiple days might be modified to include the incidental detail that “five of them [the kiwis] were a bit smaller than average.”

    Adding in these red herrings led to what the researchers termed “catastrophic performance drops” in accuracy compared to GSM8K, ranging from 17.5 percent to a whopping 65.7 percent, depending on the model tested. These massive drops in accuracy highlight the inherent limits in using simple “pattern matching” to “convert statements to operations without truly understanding their meaning,” the researchers write.
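
    The “no-op” modification can be sketched in a similar way: splice an inconsequential sentence in front of the final question. Again, this is a hypothetical illustration assuming a simple sentence structure, not the paper’s actual tooling.

```python
# Hypothetical sketch of a GSM-NoOp-style distractor: the added clause is
# "seemingly relevant but ultimately inconsequential," so the correct
# numeric answer does not change.
BASE = ("Oliver picks 44 kiwis on Friday and 58 kiwis on Saturday. "
        "How many kiwis does he have?")
DISTRACTOR = "Five of the kiwis were a bit smaller than average. "

def add_noop(question: str, distractor: str) -> str:
    # Insert the distractor just before the final question sentence.
    stem, _, final = question.rpartition(". ")
    return f"{stem}. {distractor}{final}"

noop_question = add_noop(BASE, DISTRACTOR)
# The answer is still 44 + 58 = 102, with or without the distractor.
```

    A model that genuinely parsed the problem would ignore the extra clause; the reported accuracy drops suggest many models instead fold it into their pattern-matched arithmetic.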

    Kyle Orland, Ars Technica

  • How to Stop Your Data From Being Used to Train AI

    If you’re using a personal Adobe account, it’s easy to opt out of the content analysis. Open up Adobe’s privacy page, scroll down to the Content analysis for product improvement section, and click the toggle off. If you have a business or school account, you are automatically opted out.

    Amazon: AWS

    AI services from Amazon Web Services, like Amazon Rekognition or Amazon CodeWhisperer, may use customer data to improve the company’s tools, but it’s possible to opt out of the AI training. This used to be one of the most complicated processes on the list, but it’s been streamlined in recent months. Outlined on this support page from Amazon is the full process for opting out your organization.

    Figma

    Figma, a popular design software, may use your data for model training. If your account is licensed through an Organization or Enterprise plan, you are automatically opted out. On the other hand, Starter and Professional accounts are opted in by default. This setting can be changed at the team level by opening the settings to the AI tab and switching off the Content training.

    Google Gemini

    For users of Google’s chatbot, Gemini, conversations may sometimes be selected for human review to improve the AI model. Opting out is simple, though. Open up Gemini in your browser, click on Activity, and select the Turn Off drop-down menu. Here you can just turn off the Gemini Apps Activity, or you can opt out as well as delete your conversation data. While this does mean that, in most cases, future chats won’t be selected for human review, data that has already been selected is not erased through this process. According to Google’s privacy hub for Gemini, these chats may stick around for three years.

    Grammarly

    Grammarly updated its policies, so personal accounts can now opt out of AI training. Do this by going to Account, then Settings, and turning the Product Improvement and Training toggle off. Is your account through an enterprise or education license? Then, you are automatically opted out.

    Grok AI (X)

    Kate O’Flaherty wrote a great piece for WIRED about Grok AI and protecting your privacy on X, the platform where the chatbot operates. It’s another situation where millions of users of a website woke up one day and were automatically opted in to AI training with minimal notice. If you still have an X account, it’s possible to opt out of your data being used to train Grok by going to the Settings and privacy section, then Privacy and safety. Open the Grok tab, then deselect your data sharing option.

    HubSpot

    HubSpot, a popular marketing and sales software platform, automatically uses data from customers to improve its machine-learning model. Unfortunately, there’s not a button to press to turn off the use of data for AI training. You have to send an email to privacy@hubspot.com with a message requesting that the data associated with your account be opted out.

    LinkedIn

    Users of the career networking website were surprised to learn in September that their data was potentially being used to train AI models. “At the end of the day, people want that edge in their careers, and what our gen-AI services do is help give them that assist,” says Eleanor Crum, a spokesperson for LinkedIn.

    You can opt out from new LinkedIn posts being used for AI training by visiting your profile and opening the Settings. Tap on Data Privacy and uncheck the slider labeled Use my data for training content creation AI models.

    OpenAI: ChatGPT and Dall-E

    People reveal all sorts of personal information while using a chatbot. OpenAI provides some options for what happens to what you say to ChatGPT—including allowing its future AI models not to be trained on the content. “We give users a number of easily accessible ways to control their data, including self-service tools to access, export, and delete personal information through ChatGPT. That includes easily accessible options to opt out from the use of their content to train models,” says Taya Christianson, an OpenAI spokesperson. (The options vary slightly depending on your account type, and data from enterprise customers is not used to train models).

    Matt Burgess, Reece Rogers

  • The OpenAI Talent Exodus Gives Rivals an Opening

    When investors poured $6.6 billion into OpenAI last week, they seemed largely unbothered by the latest drama, which recently saw the company’s chief technology officer, Mira Murati, along with chief research officer Bob McGrew and Barret Zoph, a vice president of research, abruptly quit.

    And yet those three departures were just the latest in an ongoing exodus of key technical talent. Over the past few years, OpenAI has lost several researchers who played crucial roles in developing the algorithms, techniques, and infrastructure that helped make it the world leader in AI as well as a household name. Several other ex-OpenAI employees who spoke to WIRED said that an ongoing shift to a more commercial focus continues to be a source of friction.

    “People who like to do research are being forced to do product,” says one former employee who works at a rival AI company but has friends at OpenAI. This person says some of their contacts at the firm have reached out in recent weeks to inquire about jobs. OpenAI itself has also seemingly shifted in its hiring priorities, according to data compiled for WIRED by Lightcast, a company that tracks job postings to analyze labor trends. In 2021, 23 percent of its job postings were for general research roles. In 2024 general research accounted for just 4.4 percent of job postings.

    The brain drain could have lasting implications for OpenAI’s direction and future success. Experts and former employees say the company still has a deep bench of talent, but competition is intensifying, making it more challenging to maintain an edge.

    The latest big-name departure, revealed on Thursday, is that of Tim Brooks, head of OpenAI’s Sora AI video generation project. Brooks posted on X that he would join one of OpenAI’s main rivals, Google DeepMind.

    “It could start to change things,” says a former OpenAI staff member, who now works in academia, of the losses. They asked to remain anonymous out of concern for harming collaborative relationships with the AI industry.

    For now, this person says, many students still put OpenAI at the top of their list of potential employers. It is seen as several months ahead of the competition, and prospective employees are often willing to put up with the apparent drama and infighting to be part of that. But applicants are also often drawn to working with a particular researcher or team, and their calculations could change as more big-name researchers leave for rival AI companies or their own startups.

    A look at some of OpenAI’s most important research shows how much talent has departed. Of 31 people listed as authors of an early version of OpenAI’s GPT large language model, fewer than half remain at OpenAI, according to employment details sourced from LinkedIn or other public social media profiles. Several members of the team responsible for developing GPT left OpenAI in 2021 to form Anthropic, now a major rival. Roughly a third of those listed in the acknowledgements for a technical blog post describing ChatGPT have since left.

    Will Knight

  • Sam Altman Could Be Worth $13B as OpenAI Seeks Restructuring

    Without OpenAI equity, Sam Altman is already worth $2 billion. Stefano Guidi/Getty Images

    OpenAI CEO Sam Altman, who has famously claimed he doesn’t own any equity in the $157 billion A.I. company he runs, could soon be a multibillionaire as OpenAI is reportedly looking to grant him a 7 percent equity stake, worth $11 billion, according to Reuters. While Altman denied the report, OpenAI chairman Bret Taylor issued a statement saying the company’s board indeed had discussions about “whether it would be beneficial to the company and our mission to have Sam be compensated with equity, but no specific figures have been discussed nor have any decisions been made.”

    The pressure to give Altman equity likely came from external investors. Fortune reported last week that OpenAI investors are “pushing hard” for him to have skin in the game in order to raise massive funding. Reuters reported last month that OpenAI was ready to raise $6.5 billion from investors contingent on whether the company can change its corporate structure and remove a profit cap for investors.

    Restructuring the company and giving Altman equity would reassure OpenAI investors that the leadership team is committed to maximizing their returns, which is crucial as OpenAI seeks more funding to meet its ambitious goals. Earlier this year, Altman made headlines for seeking to eventually raise up to $7 trillion in funding—more than the annual GDP of Germany, the world’s third-largest economy—to achieve OpenAI’s long-term goals.

    While Altman has not confirmed his plan to transition OpenAI into a for-profit structure, he said at a technology conference in Italy last week that OpenAI had been considering a restructuring to get to the “next stage.” Also last week, OpenAI’s CTO, Mira Murati, announced her resignation. So did two other senior executives, Barret Zoph and Bob McGrew. Industry observers wondered whether their exits were related to the company’s restructuring, although Altman denied such speculation at the Italy conference.

    OpenAI was founded in 2015 as a nonprofit research lab funded by donations from billionaires like Reid Hoffman and Elon Musk. Realizing that “donations alone would not scale with the cost of computational power and talent required to push core research forward,” according to its website, OpenAI in 2019 introduced a for-profit arm. The arm operates under a capped-profit model, but the cap is so high that it might as well not exist—it allows OpenAI’s investors to reap a gain of up to 100 times their initial investments.

    Altman is already a billionaire

    Altman, 39, is currently estimated to be worth $2 billion, according to Bloomberg. He has $1.2 billion invested across a range of venture capital funds branded as Hydrazine Capital, along with an additional $434 million in Apollo Projects.

    Altman owns shares in several high-flying tech companies, including an 8.7 percent stake in Reddit. In 2021, he invested $375 million in Helion Energy, a startup building the world’s first fusion power plant. In 2022, he invested $180 million in Retro Biosciences, a startup focused on slowing aging.

    At a congressional hearing last May, Altman said he owned “no equity in OpenAI.” In a later statement through OpenAI spokesperson Steve Sharpe, Altman confirmed he doesn’t own profit-participation units either, an OpenAI scheme that gives employees a right to earn a given percentage of the company’s profit, similar to equity compensation.

    According to regulatory filings, Altman owns 75 percent of the OpenAI Startup Fund, an independent entity associated with OpenAI that doesn’t receive funding from the company. The fund manages $325 million in assets to invest in smaller A.I. companies. However, Sharpe said Altman has not invested his own money, so he cannot financially benefit from the fund. In April, Altman was removed as an owner or controller of the startup fund amid scrutiny that it was too closely tied to OpenAI despite claiming independence.

    Shreyas Sinha

  • OpenAI’s ChatGPT Breaks Out of Its Box—and Onto a Canvas

    Although both writing and coding modes give the choice of requesting in-line edits, the bifurcated user interface for canvas is designed with one additional set of shortcuts for those focused on AI-assisted writing and another for coders. In the demo, Levine showed off how the writer’s shortcut could be used to condense the number of words in a canvas or attempt to perform a “final polish” on the draft. He also used one of the more lighthearted shortcuts to add a bunch of random emoji. On the coder’s side, ChatGPT can add logs, comments, and attempt to troubleshoot problems in a canvas.

    ChatGPT saves different versions of the canvas as you’re iterating, so you can return to old versions if you end up preferring that output. Writers who may be worried about what they upload being used by OpenAI to train its model should go into their user settings and make sure that “model training” is toggled off.

    By allowing ChatGPT to make edits as well as suggestions, OpenAI is blurring the line between authorship and word curation. As someone who works with professional editors daily, I’m skeptical the canvas beta will match their incisive notes and careful guidance. But for people who don’t have easy access to human writing partners, I can see how getting synthetic notes on a composition about structure and content would be beneficial.

    It’s worth noting that three people listed as “supporting leadership” on the canvas project are no longer with the company. Former post-training colead and cofounder John Schulman left in August and now works at Anthropic, a rival AI company. Additionally, former chief technology officer Mira Murati and research vice president Barret Zoph both stepped down from their positions a week before this launch. At a press event in the OpenAI office after the departures, current chief product officer Kevin Weil reaffirmed the company’s commitment to continue releasing software.

    “I think 2025 is gonna be the year that agentic systems finally hit the mainstream,” he says. The idea of an AI “agent” that can not only work through software tasks alongside you, but is also nimble enough to be sent off into the digital wilderness to do things on your behalf, is simultaneously generative AI’s recent past and projected future.

    Last year, WIRED covered ChatGPT’s plug-ins people could use for tasks, like booking flights with Expedia or making a reservation with OpenTable—arguably a step toward more “agentic” AI tools. However, plug-ins were later wound down, with more limited custom GPT chatbots launched in their place.

    Keeping that in mind, the beta release of canvas does appear to be another attempt at augmenting AI models with more decision-making abilities, which can lead to surprises. During one of WIRED’s demos, Levine highlighted a portion of the canvas and requested an edit, and ChatGPT subsequently made an in-line change near the bottom, outside his highlight. “The really interesting thing is oftentimes, if you highlight a section, it will make an edit in that part,” he says. “But ChatGPT has the option to decide where to edit.”

    The closest alternative to OpenAI’s canvas tool available right now is probably Google’s Gemini integration, which lets you use generative AI inside of Docs, or Anthropic’s Artifacts tool. Chatbots definitely aren’t dead, but AI companies are now acknowledging the format’s constraints and looking for ways to diversify their software to uncover novel, sticky user interfaces. Google recently received praise in tech circles for its entertaining AI podcasts—even OpenAI CEO Sam Altman lauded the tool.

    With billions of investment dollars still flowing through Silicon Valley to AI companies, consumers can expect to see more of these structural experiments that build on existing tools, like AI podcast hosts and AI document editors, to be released with a regular cadence over the next year. The chatbot race is far from over, and future iterations on the technology are likely to stray far away from that drab chatbox, and toward a more multifaceted approach.

    Reece Rogers

  • This Facial Recognition Experiment With Meta’s Smart Glasses Is a Terrifying Vision of the Future

    Two college students have used Meta’s smart glasses to build a tool that quickly identifies any stranger walking by and brings up that person’s sensitive information, including their home address and contact information, according to a demonstration video posted to Instagram. And while the creators say they have no plans to release the code for their project, the demo gives us a peek at humanity’s very likely future—a future that used to be confined to dystopian sci-fi movies.

    The two people behind the project, AnhPhu Nguyen and Caine Ardayfio, are computer science students at Harvard who often post their tech experiments on social media, including 3D-printed images and wearable flame-throwers. But it’s their latest experiment, first spotted by 404 Media, that’s probably going to make a lot of people feel uneasy.

    An Instagram video posted by Nguyen explains how the two men built a program that feeds the visual information from Meta Ray Ban smart glasses into facial recognition tools like Pimeyes, which have essentially scraped the entire web to identify where that person’s face shows up online. From there, a large language model infers the likely name and other details about that person. That name is then fed to various websites that can reveal the person’s home address, phone number, occupation or other organizational affiliations, and even the names of relatives.

    “To use it, you just put the glasses on, then as you walk by people, the glasses will detect when somebody’s face is in frame. This photo is used to analyze them, and after a few seconds, their personal information pops up on your phone,” Nguyen explains in the Instagram video.

    Nguyen and Ardayfio call their project I-XRAY, and it’s pretty stunning how much information they’re able to pull up in a short amount of time. They’re quick to point out that many of these tools have only become widely available in the past few years. For example, Meta’s smart glasses with camera capabilities that look like regular eyeglasses were only released last year. The kind of LLM data extraction they’re achieving has only become possible in the past two years. Even the ability to look up partial social security numbers (thanks to all those data leaks you read about every day now) has only been possible at the consumer level since 2023.

    As you can see in the video, they also approached strangers and acted like they knew those people from elsewhere after instantly looking up their information.

    “The system leverages the ability of LLMs to understand, process, and compile vast amounts of information from diverse sources–inferring relationships between online sources, such as linking a name from one article to another, and logically parsing a person’s identity and personal details through text,” the creators say in an explanation document posted to Google Drive. “This synergy between LLMs and reverse face search allows for fully automatic and comprehensive data extraction that was previously not possible with traditional methods alone.”

    The creators list the tools they used in their release, noting that anyone can request that those services remove their information. For reverse facial search engines, there’s Pimeyes and Facecheck ID. For search engines that include personal information there’s FastPeopleSearch, CheckThem, and Instant Checkmate. As for the social security number information, there’s no way to get that stuff removed, so the students recommend freezing your credit.

The students didn’t immediately respond to questions from Gizmodo on Wednesday morning. Meta also didn’t respond to a request for comment. We’ll update this post if we hear back. But in the meantime, we should all probably get ready for this kind of tech to emerge more widely, since this technological mash-up feels inevitable at this point—especially if any of the new smart glasses that guys like Mark Zuckerberg love so much really become mainstream.

    It may take quite a while for the biggest tech companies to get behind it, but just as we saw OpenAI essentially shoot the starting gun for consumer-facing generative AI, any small upstart could plausibly make this product happen and start the dominoes falling for other larger tech companies to get this future started. Let’s cross our fingers and hope for the best, given the privacy implications. It really feels like nobody will have any semblance of anonymity in public once this ball gets rolling.

    Matt Novak

  • OpenAI’s Leadership Exodus: 9 Key Execs Who Left the A.I. Giant This Year

    Mira Murati, Ilya Sutskever, Greg Brockman and Andrej Karpathy (clockwise, starting at top left). Photos by Slaven Vlasic/Getty Images, JACK GUEZ/AFP via Getty Images, Anna Moneymaker/Getty Images and Michael Macor/The San Francisco Chronicle via Getty Images

    Since ChatGPT took the world by storm in late 2022, OpenAI’s revenue and market value have skyrocketed. But internally, the company hasn’t necessarily had the smoothest ride. The A.I. giant, valued at $150 billion, lost a slew of top executives this year. On Wednesday (Sept. 25) alone, a trio of leaders, including chief technology officer Mira Murati, chief research officer Bob McGrew, and VP of research Barret Zoph, all announced their departures. They join a larger group of former OpenAI employees who have left for rival A.I. developers and startups. As of now, CEO Sam Altman is one of only two active remaining members of the company’s original 11-person founding team.

    OpenAI hasn’t just lost employees—it has also rehired some familiar faces. In May, OpenAI welcomed back Kyle Kosic, who worked at the company between 2021 and 2023 on its technical staff. Kosic left last year to join Elon Musk’s xAI. Several other outgoing OpenAI employees have taken similar routes and gone on to work for competing A.I. companies, showing just how competitive the industry is at the moment.

    Here’s a look at some of the top leaders OpenAI has lost in 2024 thus far:

    Andrej Karpathy, research scientist

    Andrej Karpathy has left OpenAI not once but twice. One of OpenAI’s 11 founders, Karpathy helped build the company’s team on computer vision, generative modeling and reinforcement learning. He first departed in 2017 to lead Tesla’s Autopilot effort. Returning to OpenAI in 2023, Karpathy left once again in February this year to focus on “personal projects.” He subsequently established Eureka Labs, an A.I. education startup.

    Ilya Sutskever, chief scientist and co-head of the super alignment team

    A renowned machine learning researcher, Ilya Sutskever helped co-found OpenAI nearly a decade ago and served as the company’s chief scientist. He was also notably a member of the four-person board that temporarily ousted Altman last year before reinstating him. Sutskever, who was subsequently removed from the board, later said he regretted his involvement in the brief ouster. In May, he announced his departure from OpenAI and said he was leaving for a venture that is “very personally meaningful.”

    This project was revealed to be Safe Superintelligence, a startup focused on developing a safe form of artificial general intelligence (AGI), a type of A.I. that can think and learn on par with humans. Earlier this month, the company was valued at $5 billion after raising $1 billion from investors, including Andreessen Horowitz and Sequoia Capital.

    Jan Leike, co-head of the super alignment team

    Just days after Sutskever left, OpenAI executive Jan Leike announced his resignation as well. Sutskever and Leike co-ran the company’s safety team, which has since been disbanded. Leike said he decided to leave in part due to disagreements with OpenAI leadership “about the company’s core priorities,” citing a lack of focus on safety processes around developing AGI. Leike has since taken up a new role as head of alignment science at Anthropic, an OpenAI rival founded by former OpenAI employees Dario Amodei and Daniela Amodei.

    John Schulman, head of alignment science

    John Schulman, another OpenAI co-founder, made significant contributions to the creation of ChatGPT. After Leike’s departure, Schulman became head of OpenAI’s alignment science efforts and was appointed to its new safety committee in May. That’s why Schulman’s decision in August to step away from the company came as a surprise—especially when he revealed that he would be joining Anthropic. “This choice stems from my desire to deepen my focus on A.I. alignment and to start a new chapter of my career where I can return to hands-on technical work,” said Schulman on X, where he also clarified that his decision to step away from OpenAI wasn’t connected to a lack of support for alignment research.

    Peter Deng, vice president of consumer product

    Peter Deng, a top OpenAI product executive, also decided to step away from the company earlier this year. Having first joined OpenAI last year, he ended his tenure as vice president of product in July, according to his LinkedIn. Deng, who also previously held product leader positions at companies like Uber (UBER) and Meta (META), has not publicly revealed his next steps.

    Greg Brockman, president

    Greg Brockman, often seen as Altman’s right-hand man, hasn’t technically left the company but is instead taking a sabbatical through the end of 2024. In August, he announced his time off and described it as the “first time to relax since co-founding OpenAI nine years ago.” Brockman started off as OpenAI’s chief technology officer before becoming the company’s president in 2022. He indicated that he plans to return to OpenAI, noting that “the mission is far from complete; we still have a safe AGI to build.”

    Mira Murati, chief technology officer

    Mira Murati, one of OpenAI’s most public-facing figures, resigned earlier this week after more than six years with the company. “I’m stepping away because I want to create the time and space to do my own exploration,” said Murati, who notably served as interim CEO during Altman’s brief ousting last year, on X. Adding that she will “still be rooting” for OpenAI, Murati said her primary focus currently is “doing everything in my power to ensure a smooth transition, maintaining the momentum we’ve built.” Altman praised her leadership in a statement on X, describing Murati as instrumental to OpenAI’s “development from an unknown research lab to an important company.”

    Bob McGrew, chief research officer

    Shortly after Murati’s resignation, Bob McGrew, OpenAI’s chief research officer, also announced plans to leave the company. He simply said on X, “It is time for me to take a break.” Having previously worked at PayPal (PYPL) and Palantir, McGrew started off as a member of OpenAI’s technical staff and has been serving as OpenAI’s chief research officer since August.

    Barret Zoph, vice president of research

    Barret Zoph is the third executive who announced his resignation this week. Like his two colleagues, Zoph said it’s a “personal decision based on how I want to evolve the next phase of my career.” Zoph, a former research scientist at Google (GOOGL), joined OpenAI in 2022 and played a large role in overseeing OpenAI’s post-training team.

    Murati, McGrew and Zoph made their decisions independently of each other, according to Altman, but decided to depart simultaneously “so that we can work together for a smooth handover to the next generation of leadership.” The CEO conceded that, while the abruptness of the leadership changes isn’t the most natural, “we are not a normal company.”

    Alexandra Tremayne-Pengelly

  • OpenAI reportedly plans to increase ChatGPT’s price to $44 within five years

OpenAI is reportedly telling investors that it plans on charging $22 a month to use ChatGPT by the end of the year. The company also plans to aggressively raise the monthly price to $44 over the next five years.
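As a quick sanity check on those numbers, going from $22 to $44 over five years works out to roughly a 15 percent compound annual price increase:

```python
# Implied compound annual growth rate for a price rising from $22 to $44
# over five years: (end / start) ** (1 / years) - 1.
start, end, years = 22.0, 44.0, 5
annual_rate = (end / start) ** (1 / years) - 1
print(f"{annual_rate:.1%}")  # roughly 14.9% per year
```

That is "aggressive" mainly in the sense of compounding: a doubling over five years well outpaces typical subscription-price inflation.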

The documents obtained by The New York Times show that OpenAI took in $300 million in revenue this August, and expects to make $3.7 billion in sales by the end of the year. Various expenses such as salaries, rent and operational costs will cause the company to lose $5 billion this year.

    OpenAI is reportedly circulating the documents the NYT reported on as part of a drive to find new investors to prevent or lessen its financial shortfall. Fortunately, OpenAI is raising money on a $150 billion valuation, and a new round of investments could bring in as much as $7 billion.

OpenAI is also reportedly in the midst of restructuring its business model. The new structure would allow for the removal of any caps on investor returns, giving the company more room to negotiate with new investors at possibly higher valuations.

    Danny Gallagher

  • Ta-Nehisi Coates & Sarah Silverman Win Bid For Meta “Chief Decision Maker” Mark Zuckerberg To Be Deposed In AI Suit

Mark Zuckerberg really doesn’t want to have to answer some hard questions about Meta‘s artificial intelligence push and goals. However, a federal judge this week told the Facebook founder that is exactly what he has to do.

“Plaintiffs have made an evidentiary showing that Zuckerberg is the chief decision maker and policy setter for Meta’s Generative AI branch and the development of the large language models at issue in this action,” U.S. District Judge Thomas Hixson noted on September 24 in the potential class action initially filed by authors Sarah Silverman, Richard Kadrey, and Christopher Golden last year, and now including Ta-Nehisi Coates and others.

Along with a more imperiled suit against OpenAI, the writers took Meta to court in mid-2023 over copyright infringement concerns that their books have been illegally downloaded and used to train the company’s large language model AI software.

Bedwetter scribe Silverman and National Book Award winner Coates, along with other plaintiffs, allege that “much of the material in Meta’s training dataset, however, comes from copyrighted works—including books written by Plaintiffs—that were copied by Meta without consent, without credit, and without compensation.”

With some legal wiggle room here and there, Meta denies it accessed the authors’ work for its LLaMA system. Meta’s army of attorneys has also been trying to push the line that there are loads of other people at the tech giant better qualified than Zuckerberg to be questioned by David Boies and other lawyers for the plaintiffs.

    It didn’t fly.

“Plaintiffs do not generically argue, as Meta suggests, that because Zuckerberg is the CEO of the company that he is therefore in charge of everything,” the judge noted in his order denying Meta’s motion to keep the CEO from having to face questioning by lawyers for Silverman and the other plaintiffs. “Rather, they have submitted evidence of his specific involvement in the company’s AI initiatives. They have submitted evidence indicating Zuckerberg was the principal decision maker concerning Meta’s decision to open source the language model. They have also submitted evidence of Zuckerberg’s direct supervision of Meta’s AI products.”

Judge Hixson also stated: “Given this factual showing, the Court is not going to require Plaintiffs to exhaust other forms of discovery before they depose Zuckerberg. They’ve made a solid case that this deposition is worth taking.”

Never a big fan of being put in front of a microphone, Zuckerberg does not yet have a time and date scheduled for his deposition. With that, a hearing on discovery in the case just wrapped up earlier this afternoon in San Francisco that could see the deposition occurring sooner rather than later.

    By then, everything AI could be different, again.

Coming up on two years since ChatGPT brought AI to the masses, so to speak, the technology is quickly moving more and more to the fore in almost all aspects of society and industry.

    The results are mixed, depending on your perspective.

On the one hand, for instance, California Gov. Gavin Newsom signed legislation earlier this month to partially protect the likeness of actors and performers, living and dead. At almost the same time, Lionsgate and applied AI research company Runway unveiled a partnership on September 18 to develop AI customized to the studio’s proprietary portfolio of film and television content like John Wick.

With a bit of a nose thumbing to the court and a nudge towards the seemingly inevitable future, Zuckerberg was on stage today in Menlo Park, California at the company’s Meta Connect conference to speak on all things AI. Part of the roll-out and announcements was the news that Meta’s AI chatbot will now communicate in the voices of Awkwafina, Dame Judi Dench, Kristen Bell, John Cena, or Keegan-Michael Key.

    Sadly, Zuckerberg will have to give his deposition in his own voice.

    Dominic Patten

  • OpenAI CTO Mira Murati Is Leaving the Company

    OpenAI chief technology officer Mira Murati resigned on Wednesday, saying she wants “the time and space to do my own exploration.” Murati had been among the three executives at the very top of the company behind ChatGPT, and she was briefly its leader last year while board members wrestled with the fate of CEO Sam Altman.

    “There’s never an ideal time to step away from a place one cherishes, yet this moment feels right,” she wrote in a message to OpenAI staff that she posted on X.

    Altman replied to Murati’s X post writing that “it’s hard to overstate how much Mira has meant to OpenAI, our mission, and to us all personally.” He added that he feels “personal gratitude towards her for the support and love during all the hard times.”

    A successor wasn’t immediately announced.

    Murati, through a personal spokesperson, declined to provide further comment. OpenAI also declined to comment, referring inquiries to Murati’s tweet.

    Murati previously worked at Tesla and Leap Motion before joining OpenAI in 2018. At the time, OpenAI was a small nonprofit research lab focused on developing an AI system capable of mirroring a wide range of human tasks. But in the wake of the stunning success of ChatGPT, the organization has ballooned and its focus has increasingly turned commercial. The company has been rethinking its nonprofit structure, while investors have been increasingly eager to bet billions of dollars on its future.

    Murati came to OpenAI believing that AI would “be the most important set of technologies that humanity has ever built,” she told Fortune last year. “OpenAI’s mission really resonated with me, to build a technology that benefits people.”

    OpenAI was rocked by a dramatic board coup last November that saw CEO Sam Altman removed from his post and briefly replaced by Murati. After most of the staff threatened to resign, and following pleas from investors including Microsoft, which had poured billions into the company, Altman was reinstated with an all new board.

    In the months that have followed, several of OpenAI’s leadership along with senior engineering figures have stepped away from the company. Ilya Sutskever, one of the company’s first hires, the technical brains behind much of its earlier work, and a board member who voted to remove Altman before recanting, resigned from the company in May.

    Sutskever’s departure was followed shortly after by that of Jan Leike, an engineer who led work on long-term AI safety with Sutskever. John Schulman, the engineer who took over leadership of safety work, stepped down in August. In August, Greg Brockman, a cofounder of OpenAI and a board member who stood with Altman, said he was taking a sabbatical from the company until the end of the year.

    A number of former OpenAI executives and researchers have gone on to start new AI companies. Notably, Sutskever this year launched Safe Superintelligence, which focuses on developing safe artificial intelligence. Former OpenAI research chief Dario Amodei and his sister Daniela in 2021 founded Anthropic, now one of the company’s primary rivals for customers.

    This is a developing story. Please check back for updates.

    Paresh Dave, Will Knight

  • The Most Capable Open Source AI Model Yet Could Supercharge AI Agents

The most capable open source AI model with visual abilities yet could see more developers, researchers, and startups develop AI agents that can carry out useful chores on your computer for you.

    Released today by the Allen Institute for AI (Ai2), the Multimodal Open Language Model, or Molmo, can interpret images as well as converse through a chat interface. This means it can make sense of a computer screen, potentially helping an AI agent perform tasks such as browsing the web, navigating through file directories, and drafting documents.

    “With this release, many more people can deploy a multimodal model,” says Ali Farhadi, CEO of Ai2, a research organization based in Seattle, Washington, and a computer scientist at the University of Washington. “It should be an enabler for next-generation apps.”

    So-called AI agents are being widely touted as the next big thing in AI, with OpenAI, Google, and others racing to develop them. Agents have become a buzzword of late, but the grand vision is for AI to go well beyond chatting to reliably take complex and sophisticated actions on computers when given a command. This capability has yet to materialize at any kind of scale.

    Some powerful AI models already have visual abilities, including GPT-4 from OpenAI, Claude from Anthropic, and Gemini from Google DeepMind. These models can be used to power some experimental AI agents, but they are hidden from view and accessible only via a paid application programming interface, or API.

    Meta has released a family of AI models called Llama under a license that limits their commercial use, but it has yet to provide developers with a multimodal version. Meta is expected to announce several new products, perhaps including new Llama AI models, at its Connect event today.

    “Having an open source, multimodal model means that any startup or researcher that has an idea can try to do it,” says Ofir Press, a postdoc at Princeton University who works on AI agents.

    Press says that the fact that Molmo is open source means that developers will be more easily able to fine-tune their agents for specific tasks, such as working with spreadsheets, by providing additional training data. Models like GPT-4 can only be fine-tuned to a limited degree through their APIs, whereas a fully open model can be modified extensively. “When you have an open source model like this then you have many more options,” Press says.

    Ai2 is releasing several sizes of Molmo today, including a 70-billion-parameter model and a 1-billion-parameter one that is small enough to run on a mobile device. A model’s parameter count refers to the number of units it contains for storing and manipulating data and roughly corresponds to its capabilities.
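The relationship between parameter count and footprint is easy to ballpark: at 16-bit precision each parameter takes two bytes, so the memory needed just to hold the weights scales linearly with model size. The figures below are rough estimates that ignore activations, quantization, and runtime overhead:

```python
# Rough memory needed just to store a model's weights, assuming
# 16-bit (2-byte) parameters; activations and overhead are ignored.
def weight_memory_gb(num_params, bytes_per_param=2):
    return num_params * bytes_per_param / 1e9

for name, params in [("Molmo 1B", 1e9), ("Molmo 70B", 70e9)]:
    print(f"{name}: ~{weight_memory_gb(params):.0f} GB of weights")
```

Roughly 2 GB of weights for the 1-billion-parameter model versus about 140 GB for the 70-billion-parameter one, which is why only the smaller variant is a plausible fit for a phone.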

    Ai2 says Molmo is as capable as considerably larger commercial models despite its relatively small size, because it was carefully trained on high-quality data. The new model is also fully open source in that, unlike Meta’s Llama, there are no restrictions on its use. Ai2 is also releasing the training data used to create the model, providing researchers with more details of its workings.

    Releasing powerful models is not without risk. Such models can more easily be adapted for nefarious ends; we may someday, for example, see the emergence of malicious AI agents designed to automate the hacking of computer systems.

    Farhadi of Ai2 argues that the efficiency and portability of Molmo will allow developers to build more powerful software agents that run natively on smartphones and other portable devices. “The billion parameter model is now performing in the level of or in the league of models that are at least 10 times bigger,” he says.

    Building useful AI agents may depend on more than just more efficient multimodal models, however. A key challenge is making the models work more reliably. This may well require further breakthroughs in AI’s reasoning abilities—something that OpenAI has sought to tackle with its latest model o1, which demonstrates step-by-step reasoning skills. The next step may well be giving multimodal models such reasoning abilities.

    For now, the release of Molmo means that AI agents are closer than ever—and could soon be useful even outside of the giants that rule the world of AI.

    Will Knight

  • I Stared Into the AI Void With the SocialAI App

    The first time I used SocialAI, I was sure the app was performance art. That was the only logical explanation for why I would willingly sign up to have AI bots named Blaze Fury and Trollington Nefarious, well, troll me.

    Even the app’s creator, Michael Sayman, admits that the premise of SocialAI may confuse people. His announcement this week of the app read a little like a generative AI joke: “A private social network where you receive millions of AI-generated comments offering feedback, advice, and reflections.”

    But, no, SocialAI is real, if “real” applies to an online universe in which every single person you interact with is a bot.

There’s only one real human in the SocialAI equation. That person is you. The new iOS app is designed to let you post text like you would on Twitter or Threads. An ellipsis appears almost as soon as you do so, indicating that another person is loading up with ammunition, getting ready to fire back. Then, instantaneously, several comments appear, cascading below your post, each and every one of them written by an AI character. In the newest version of the app, just rolled out today, these AIs also talk to each other.

    When you first sign up, you’re prompted to choose these AI character archetypes: Do you want to hear from Fans? Trolls? Skeptics? Odd-balls? Doomers? Visionaries? Nerds? Drama Queens? Liberals? Conservatives? Welcome to SocialAI, where Trollita Kafka, Vera D. Nothing, Sunshine Sparkle, Progressive Parker, Derek Dissent, and Professor Debaterson are here to prop you up or tell you why you’re wrong.

    Screenshot of the instructions for setting up the Social AI app.

    Is SocialAI appalling, an echo chamber taken to its logical extreme? Only if you ignore the truth of modern social media: Our feeds are already filled with bots, tuned by algorithms, and monetized with AI-driven ad systems. As real humans we do the feeding: freely supplying social apps fresh content, baiting trolls, buying stuff. In exchange, we’re amused, and occasionally feel a connection with friends and fans.

    Lauren Goode
