ReportWire

Tag: Technology

  • Anthropic refuses to bend to Pentagon on AI safeguards as dispute nears deadline


    A public showdown between the Trump administration and Anthropic is hitting an impasse as military officials demand the artificial intelligence company bend its ethical policies by Friday or risk damaging its business.

    Anthropic CEO Dario Amodei drew a sharp red line 24 hours before the deadline, declaring his company “cannot in good conscience accede” to the Pentagon’s final demand to allow unrestricted use of its technology.

    Anthropic, maker of the chatbot Claude, can afford to lose a defense contract. But the ultimatum this week from Defense Secretary Pete Hegseth posed broader risks at the peak of the company’s meteoric rise from a little-known computer science research lab in San Francisco to one of the world’s most valuable startups.

    If Amodei doesn’t budge, military officials have warned they will not just pull Anthropic’s contract but also “deem them a supply chain risk,” a designation typically stamped on foreign adversaries that could derail the company’s critical partnerships with other businesses.

And if Amodei were to cave, he could lose trust within the booming AI industry, particularly from top talent drawn to the company for its promises of responsibly building better-than-human AI that, without safeguards, could pose catastrophic risks.

    Anthropic said it sought narrow assurances from the Pentagon that Claude won’t be used for mass surveillance of Americans or in fully autonomous weapons. But after months of private talks exploded into public debate, it said in a Thursday statement that new contract language “framed as compromise was paired with legalese that would allow those safeguards to be disregarded at will.”

    That was after Sean Parnell, the Pentagon’s top spokesman, posted on social media that “we will not let ANY company dictate the terms regarding how we make operational decisions” and added the company has “until 5:01 p.m. ET on Friday to decide” if it would meet the demands or face consequences.

    Emil Michael, the defense undersecretary for research and engineering, later lashed out at Amodei, alleging on X that he “has a God-complex” and “wants nothing more than to try to personally control the US Military and is ok putting our nation’s safety at risk.”

    That message hasn’t resonated in much of Silicon Valley, where a growing number of tech workers from Anthropic’s top rivals, OpenAI and Google, voiced support for Amodei’s stand late Thursday in an open letter.

    OpenAI and Google, along with Elon Musk’s xAI, also have contracts to supply their AI models to the military.

    “The Pentagon is negotiating with Google and OpenAI to try to get them to agree to what Anthropic has refused,” the open letter says. “They’re trying to divide each company with fear that the other will give in.”

    Also raising concerns about the Pentagon’s approach were Republican and Democratic lawmakers and a former leader of the Defense Department’s AI initiatives.

    “Painting a bullseye on Anthropic garners spicy headlines, but everyone loses in the end,” wrote retired Air Force Gen. Jack Shanahan in a social media post.

    Shanahan faced a different wave of tech worker opposition during the first Trump administration when he led Maven, a project to use AI technology to analyze drone footage and target weapons. So many Google employees protested its participation in Project Maven at the time that the tech giant declined to renew the contract and then pledged not to use AI in weaponry.

    “Since I was square in the middle of Project Maven & Google, it’s reasonable to assume I would take the Pentagon’s side here,” Shanahan wrote Thursday on social media. “Yet I’m sympathetic to Anthropic’s position. More so than I was to Google’s in 2018.”

    He said Claude is already being widely used across the government, including in classified settings, and Anthropic’s red lines are “reasonable.” He said the AI large language models that power chatbots like Claude are also “not ready for prime time in national security settings,” particularly not for fully autonomous weapons.

    “They’re not trying to play cute here,” he wrote.

Parnell asserted Thursday that the Pentagon wants to “use Anthropic’s model for all lawful purposes” and said opening up use of the technology would prevent the company from “jeopardizing critical military operations,” though neither he nor other officials have detailed how they want to use the technology.

    The military “has no interest in using AI to conduct mass surveillance of Americans (which is illegal) nor do we want to use AI to develop autonomous weapons that operate without human involvement,” Parnell wrote.

    When Hegseth and Amodei met Tuesday, military officials warned that they could designate Anthropic as a supply chain risk, cancel its contract or invoke a Cold War-era law called the Defense Production Act to give the military more sweeping authority to use its products, even if the company doesn’t approve.

    Amodei said Thursday that “those latter two threats are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security.” He said he hopes the Pentagon will reconsider given Claude’s value to the military, but, if not, Anthropic “will work to enable a smooth transition to another provider.”

___

    AP reporter Konstantin Toropin contributed to this report.


  • Despite clouds and fog, SpaceX successfully launches Starlink mission


CAPE CANAVERAL SPACE FORCE STATION — While clouds were a bit of a concern, SpaceX successfully launched nearly 30 Starlink satellites atop a Falcon 9 rocket Friday morning.


    What You Need To Know

• A Falcon 9 rocket sent up the Starlink 6-108 mission from Space Launch Complex 40 Friday morning
• This was the 30th launch of this Falcon 9’s first-stage booster

The Falcon 9 rocket carried the Starlink 6-108 mission from Space Launch Complex 40 at Cape Canaveral Space Force Station, SpaceX said.

    The 7:17 a.m. liftoff was within the launch window, which opened at 4:52 a.m. ET and was set to close at 8:52 a.m. 

    The 45th Weather Squadron gave an 85% chance of good liftoff conditions, with the only concern being the cumulus cloud rule.


The Big 3-0!

With this flight, this Falcon 9’s first-stage booster, called B1069, finally hit the big 3-0. It is one of the older first-stage boosters, with 29 previous missions on its résumé.

After stage separation, the first-stage booster was expected to land on the droneship A Shortfall of Gravitas in the Atlantic Ocean.

    About the mission

The 29 satellites from the SpaceX-owned Starlink company will head to low-Earth orbit to join their mechanical brothers and sisters.

    Once deployed and in their orbit, they will provide internet service to many parts of Earth.

Dr. Jonathan McDowell, of the Harvard-Smithsonian Center for Astrophysics, has been keeping track of Starlink satellites.

    Before this launch, McDowell recorded the following:

    • 9,826 are in orbit
    • 8,352 are in operational orbit


    Anthony Leone


  • Fintech company Block lays off 4,000 of its 10,000 staff, citing gains from AI


BANGKOK — Shares in the financial technology company Block soared more than 20% in premarket trading Friday after its CEO announced it was laying off more than 4,000 of its 10,000-plus employees, reconfiguring to capitalize on its use of artificial intelligence.

    “The core thesis is simple. Intelligence tools have changed what it means to build and run a company,” Jack Dorsey said in a letter to shareholders in Block, the parent company to online payment platforms such as Square and Cash App. “A significantly smaller team, using the tools we’re building, can do more and do it better,” he said.

Dorsey’s comments explicitly naming AI as a key driver behind the move were also posted on X, formerly Twitter, a company he co-founded. The assertion that the job cuts will add to Block’s profitability and efficiency led investors to jump in and buy, analysts said.

Block’s shares gained 5% Thursday to $54.53, before it reported its earnings. They shot up to nearly $69 in after-hours trading. The mobile payments services provider reported its fourth-quarter gross profit jumped 24% from a year earlier.

    “For years, we have debated whether AI would dent jobs at the margin. Now we have a public case study in which the CEO explicitly says that intelligence tools have changed what it means to build and run a company,” Stephen Innes of SPI Asset Management said in a commentary.

    “Other large employers have announced tens of thousands of cuts in recent months. Some have downplayed the AI link. Block did not,” he said.

    A global technology company founded in 2009, San Francisco-based Block operates in the United States, Canada, parts of Europe, Australia and Japan.

In a post on X, Dorsey outlined various ways the company will support those laid off. For employees overseas, the terms might differ, he said.

    It was unclear which employees would be laid off where.

Layoffs by American companies remain at relatively low levels, but the job cuts at Block are the latest among thousands announced in recent months.

    A number of other high-profile companies have announced layoffs recently, including UPS, Amazon, Dow and the Washington Post.


  • Growing more complex by the day: How should journalists govern use of AI in their products?


Like so many sectors of the economy, the news industry is hurtling toward a future where artificial intelligence plays a major role — grappling with questions about how much the technology is used, what consumers should be told about it, and whether anything can be done for the journalists who will be left behind.

These issues were on the minds of reporters for the independent outlet ProPublica as they walked picket lines earlier this month. They’re inching toward a potential strike, believed to be the first such job action in the news business where how to deal with AI is the chief sticking point.

    Few expect this dispute will be the last.

    AI has undeniably helped journalists, simplifying complex tasks and saving time, particularly with data-focused stories. News organizations are using it to help sift through the Epstein files. AI suggests headlines, summarizes stories. Transcription technology has largely eliminated the need for a human to type up interviews. These days, even a simple Google search frequently involves AI.

    Yet rushing to see how AI can help a financially troubled industry has resulted in several cases of publications owning up to errors.

Within the past year, Bloomberg issued several corrections for mistakes in AI-generated news summaries. Business Insider and Wired were forced to remove articles by a fake author named Margaux Blanchard. The Los Angeles Times had trouble with AI and opinion pieces. Ars Technica said AI fabricated quotes, and the publication, which has frequently reported on the risks of overreliance on AI tools, embarrassed itself further by failing to follow its own policy of telling readers when the tools are used.

The ProPublica dispute is noteworthy for how it touches on issues that are frequently cause for debate. The union representing ProPublica’s journalists, negotiating its first contract with the outlet known for investigative reporting, says it wants commitments that mirror those sought elsewhere in the industry about disclosure and the role of humans in the use of AI.

    Along with holding informational pickets, union members pledged overwhelmingly that they would be willing to strike without a satisfactory agreement, said Jen Sheehan, spokeswoman for the New York Guild, the union that represents many journalists in the city.

    “It feels to me pretty monumental when we think about the trajectory of AI and journalism,” said Alex Mahadevan, an expert on the topic at the Poynter Institute journalism think tank.

    ProPublica has rejected its requests, the union said. Insight into why can be found in an essay, “Something Big is Happening,” that circulated widely this month. Author and investor Matt Shumer, who said he’s spent six years building an AI startup, wrote that the technology is advancing so quickly that “if you haven’t tried AI in the last few months, what exists today would be unrecognizable to you.”

    Small wonder, then, that news executives are reluctant to put guarantees in writing that could quickly become outdated.

    Rather than make promises that can’t be kept, ProPublica is exploring how technology can create more space for investigative reporting, company spokesman Tyson Evans said. In the “unlikely event” of AI-related layoffs, ProPublica is proposing expanded severance packages for those affected, he said.

    “We’re approaching AI with both curiosity and skepticism,” Evans said. “It would be a mistake to freeze editorial decisions in a contract that will last years.”

    Fifty-seven of 283 contracts at U.S. news organizations negotiated by the NewsGuild-USA contain language related to artificial intelligence, said Jon Schleuss, president of the union that represents more journalists than any in the country. The first such deals happened in 2023, and The Associated Press was one pioneer. He wants provisions in more contracts.

It won’t be easy, judging by the reluctance of many outlets to be tied down. The organization Trusting News, which encourages news organizations to develop and make public their policies on AI use, estimates that less than half of U.S. outlets have done so.

    “I think it is becoming harder,” Schleuss said, “because too many newsrooms are being run by the greedy side of the organization and not by the journalism side of the organization.”

The guild is pushing for contracts that guarantee AI won’t eliminate jobs. That’s no surprise; unions exist to protect jobs. Schleuss characterized a proposal that ensures an actual journalist is involved when AI is used as a way to prevent errors and help an outlet build trust with its readers.

    “Humans are actually so much better at going out, finding the story, interviewing sources, bringing back the relevant pieces, asking the hard follow-up questions and putting that in a way that people can understand and see, whether it’s a news story or a video,” he said. “Humans are way better at doing that than AI ever will be.”

    Apparently, not everyone in journalism agrees. Chris Quinn, editor of The Plain Dealer in Cleveland, Ohio, wrote this month of his disgust with a recent college graduate who turned down a job offer because the person had been taught that AI was bad for journalism.

    Quinn’s newspaper has been sending some of its journalists out to cover stories by interviewing people, collecting quotes and information, then feeding it to a computer to write. While a human will edit what the computer spits out, an integral part of the process — a reporter using his or her judgment about how to tell a story — has been stripped from their hands. Quinn defended it as the best use of limited resources.

    Research shows that a vast majority of American consumers believe that it’s very important that newsrooms tell the public when AI is used to write stories or edit photographs, said Benjamin Toff, director of the Minnesota Journalism Center at the University of Minnesota. But here’s the rub: Such disclosure makes them trust the outlet’s stories less, not more.

    A significant minority — 30% in a study Toff conducted last year — doesn’t want AI used in journalism at all.

    Telling a reader that AI was used is not as simple as it sounds. “There are just so many, many uses of AI in journalism, from the very beginning of the reporting process to when you hit publish, that just broadly declaring that when AI is used in the newsgathering process that you have to disclose it, just seems like it is actually a disservice to the reader in some cases,” Poynter’s Mahadevan said.

Two lawmakers in New York state — the nation’s publishing capital — introduced legislation this month requiring clear disclaimers when artificial intelligence is used in published content. There’s no immediate word on its chances for passage, but both sponsors are Democrats in a legislature controlled by that party.

Mahadevan believes it’s fair to have policies that require human involvement — editing to prevent slip-ups, for example. But even these declarations are open to interpretation, he said. If an outlet uses chatbots to answer reader questions, are they being edited by a human being?

    “Speaking realistically, the newsroom of the future is going to look completely different than it does today,” he said. “Which means people will lose jobs. There will be new jobs. So I think it’s important that we are having these conversations right now because audiences do not want a newsroom completely taken over by AI.”

    ___

    David Bauder writes about the intersection of media and entertainment for the AP. Follow him at http://x.com/dbauder and https://bsky.app/profile/dbauder.bsky.social.


  • “I wanted to be on it all the time,” plaintiff says in landmark social media addiction trial


    A young woman who is battling against social media giants took the stand Thursday to testify about her experience using the platforms as she was growing up, saying she was on social media “all day long” as a child.

The now 20-year-old, who has been identified in court documents as KGM, says her early use of social media addicted her to the technology and exacerbated depression and suicidal thoughts. Meta and YouTube are the two remaining defendants in the case; TikTok and Snap have settled.

    The case, along with two others, has been selected as a bellwether trial, meaning its outcome could impact how thousands of similar lawsuits against social media companies are likely to play out.

    Early social media user

    KGM, or Kaley, as her lawyers have called her during the trial, started using YouTube at age 6 and Instagram at age 9.

    Kaley took the stand wearing a pink floral dress and a beige cardigan and said she was “very nervous” after her attorney, Mark Lanier, asked how she was doing Thursday morning.

    Lanier displayed childhood photos of Kaley and her family and asked about positive memories from her upbringing in a quiet cul-de-sac in Chico, California. She spoke of themed birthday parties, trips to Six Flags and her mom’s consistent efforts to make her childhood special.

    Still, Kaley’s relationship with her mother was challenging at times. Kaley said most of their arguments were over the use of her phone.

    Both the defendants and the plaintiff have pointed to a turbulent home life for Kaley. Her attorneys say she was preyed upon as a vulnerable user, but attorneys representing Meta and Google-owned YouTube have argued Kaley turned to their platforms as a coping mechanism or a means of escaping her mental health struggles.

    When asked about claims that her mother had hit her, abused her and neglected her, Kaley said “she wasn’t perfect, but she was trying her best,” and clarified that she doesn’t think she would label her mother’s past actions as abuse or neglect today. Kaley, who works as a personal shopper at Walmart, still lives with her mother in the home she grew up in.

    “It made me look popular”

    As a child, Kaley set up multiple accounts on both Instagram and YouTube so she could like and comment on her posts. She said she would also “buy” likes through a platform where she could like other people’s photos and get a slew of likes in return. “It made me look popular,” she said.

    Kaley was asked specifically about the features the plaintiffs argue are deliberately designed to be addictive, including notifications. Those notifications on both Instagram and YouTube gave her a “rush,” she said. She would receive them throughout the day and would go to the bathroom during school to check them — something she still does.

    Kaley said while she uses YouTube less often now, she believes she was previously addicted to it. “Anytime I tried to set limits for myself, it wouldn’t work and I just couldn’t get off,” she said.

    Filters on Instagram, specifically those that could change a person’s cosmetic appearance, have also loomed large in the case and were also a constant fixture of Kaley’s use. Lanier and his colleagues unfurled a nearly 35-foot-long canvas banner with photos Kaley has posted on Instagram. She said “almost all” of the photos had a filter on them.

    The jury was also shown Instagram posts and YouTube videos Kaley posted as a child and young teen. One video that tapped into the popular trend at the time, sharing a nighttime routine, showed a young Kaley scrolling on her phone, showering and taking off makeup and then returning to her phone to go on Instagram. Another video showed her saying she was “crying tears of joy” after surpassing 100 YouTube subscribers — but then she quickly turned to her looks, apologizing for her “ugly appearance.”

    “I look so fat in this shirt,” the young Kaley says in the video.

    Meta highlights mental health struggles

Meta has argued that Kaley faced significant challenges before she ever used social media. The company’s lawyer, Paul Schmidt, said earlier this month that the core question in the case is whether the platforms were a substantial factor in Kaley’s mental health struggles.

    During opening arguments, he spent much of his time going through the plaintiff’s health records, emphasizing that she had experienced many difficult circumstances in her childhood, including emotional abuse, body image issues and bullying.

    Kaley said she did not experience the negative feelings associated with her body dysmorphia diagnosis before she began using social media and filters.

    Kaley was asked about her peak Instagram usage, which exceeded 16 hours one day. “I just felt like I wanted to be on it all the time, and if I wasn’t on it, I felt like I was going to miss out on something,” she said.

    When she tried to stop using the platforms, she said she was often unsuccessful.

    “Every single day, I was on it all day long,” she said.

    Therapist’s testimony

Victoria Burke, a former therapist who worked with Kaley in 2019, testified on Wednesday that Kaley’s social media use and her sense of self “were closely related,” adding that what was happening on the platforms could “make or break her mood.”

Burke treated Kaley for about six months, roughly seven years ago.

The case has been the subject of intense interest among advocacy groups lobbying for enhanced child safety protections and the tech world alike, with high-profile testimony from the head of Instagram, Adam Mosseri, and Meta CEO Mark Zuckerberg.

During Zuckerberg’s testimony, when he was asked if people tend to use something more if it’s addictive, he said, “I’m not sure what to say to that.”

    “I don’t think that applies here,” he continued. He said he believes in the “basic assumption” that “if something is valuable, people will use it more because it’s useful to them.” Mosseri also said he didn’t believe people could become clinically addicted to social media platforms.

    The case is expected to continue for several weeks, with a ruling potentially shaping the outcome of a slew of similar lawsuits against social media companies.


  • Ruoming Pang, Meta’s $200M Superintelligence Hire, Jumps to OpenAI After Just 7 Months



    Ruoming Pang, a prominent A.I. researcher recruited by Meta last year with a pay package reportedly worth more than $200 million, has left the company to join OpenAI, The Information reported yesterday (Feb. 25). His departure marks another setback for Mark Zuckerberg’s elite A.I. team and underscores the escalating A.I. talent war. Pang joined Meta Superintelligence Labs (MSL) in July after being poached from Apple. He remained at Meta for only seven months.

    Zuckerberg unveiled MSL in July 2025 as the centerpiece of Meta’s push to develop advanced A.I. systems. The lab quickly became the focus of an aggressive—and costly—hiring spree. Alexandr Wang, founder of Scale AI, now leads the group as Meta’s A.I. chief after Meta acquired 40 percent of his startup. Within MSL, a smaller, more secretive unit known as TBD Lab is tasked with building next-generation foundation models.

Pang is originally from Shanghai and earned his undergraduate degree from Shanghai Jiao Tong University. He holds a master’s in computer science from the University of Southern California and earned a Ph.D. from Princeton University in 2006. Over the course of his career, Pang has worked on some of the most consequential A.I. systems in the industry, making him one of the more sought-after engineers in the field.

    At Apple, he spent nearly four years as a “senior distinguished engineer,” leading development of the foundation models behind Apple Intelligence. Before Apple, Pang spent roughly 15 years at Google DeepMind as a principal software engineer, where he worked on large-scale machine learning systems, including privacy-preserving technologies and speech recognition.

    OpenAI has not disclosed Pang’s title, scope of responsibilities or the terms of his compensation. The Sam Altman-led company reportedly courted him for months, so the package is likely substantial. OpenAI employees earn roughly $1.5 million in annual salary and equity, according to the Wall Street Journal. Pang is widely expected to continue working on foundation models and superintelligence research.

    For Meta, Pang’s exit complicates Zuckerberg’s ambition to dominate the superintelligence race. The company has successfully recruited high-profile researchers from OpenAI, Google and Anthropic. However, MSL has also seen a steady stream of departures in recent months.

Among the most prominent was Yann LeCun, Meta’s chief A.I. scientist, who exited at the end of last year after more than a decade at the company. LeCun publicly criticized MSL chief Wang’s lack of experience with A.I. research.

    Other departures have been quieter but telling. Ethan Knight joined MSL for only a few weeks before moving to OpenAI last August—a stint so brief it never appeared on his LinkedIn profile. Bert Maher, a software engineer, left after 12 years at Meta to join Anthropic. Avi Verma, who had been expected to join Meta from OpenAI, ultimately backed out.

    Pang’s move is the latest signal that Silicon Valley’s A.I. talent war is intensifying. Even as talk of an A.I. bubble grows louder and tech companies rely on increasingly complex financial structures to sustain lofty valuations, leaders like Zuckerberg, Altman and Anthropic’s Dario Amodei show little sign of restraint. Instead, they are offering compensation packages worth tens or even hundreds of millions of dollars to persuade top researchers that their vision for superintelligence will prevail.



    Rachel Curry


  • AI song generator startups angered the music industry. Now they’re hoping to join it


    CAMBRIDGE, Mass. — Suno CEO Mikey Shulman pulls up a chair to the recording studio desk where a research scientist at his artificial intelligence company is creating a new song.

    The flute line sounds promising.

    The percussion needs work.

    Neither of them is playing an instrument. They type some descriptive words – Afrobeat, flute, drums, 90 beats per minute – and out comes an infectious rhythm that livens up the 19th century office building where Suno is headquartered in Cambridge, Massachusetts. They toggle some editing tools to refine the new track.

    Much like early experiences with ChatGPT or AI text-to-image generators, trying to make an AI-generated song on platforms like Suno or its rival, Udio, can seem a little like magic. It takes no musical skills, practice or emotional wellspring to conjure up a new tune inspired by almost any of the world’s musical traditions.

    But the process of training AI on beloved musicians of the past and present to produce synthetic approximations of their work has angered the music industry and brought much of its legal power against the two startups.

Now, after their users have flooded the internet with millions of AI-generated songs, some of which have found their way onto streaming services like Spotify, the leaders of Suno and New York-based Udio are trying to negotiate with record labels to secure a foothold in an industry that shunned them.

    “We have always thought that working together with the music industry instead of against the music industry is the only way that this works,” said Shulman, who co-founded Suno in 2022. “Music is so culturally important that it doesn’t make sense to have an AI world and a non-AI world of music.”

    Sony Music, Universal Music and Warner Records sued the two startups for copyright infringement in 2024, alleging that they were exploiting the recorded works of their artists.

    Since then, the pair have strived to make peace with the industry. Suno, now valued at $2.45 billion, last year struck a settlement with Warner, and Udio has signed licensing agreements with Warner, Universal and independent label Merlin. Only one major label, Sony, has not settled with either startup as the lawsuits move forward in Boston and New York federal courts.

    The first of the settlement deals, between Udio and Universal, led to an exodus of frustrated Udio users who were blocked from downloading their own AI-generated tracks. But Udio CEO Andrew Sanchez said he’s optimistic about what the future will bring as his company adapts its business model to let fans of willing artists use AI to play with and potentially alter their works.

    “Having a close relationship with the music industry is elemental to us,” Sanchez said in an interview. “Users really want to have an anchor to their favorite artists. They want to have an anchor to their favorite songs.”

    Many professional musicians are skeptical. Singer-songwriter Tift Merritt, co-chair of the Artists Rights Alliance, recently helped organize a “Stealing Isn’t Innovation” campaign by artists — including Cyndi Lauper and Bonnie Raitt — to urge AI companies to pursue licensing deals and partnerships rather than build platforms without regard for copyright law.

    “The economy of AI music is built totally on the intellectual property, globally, of musicians everywhere without transparency, consent, or payment. So, I know they value their intellectual property, but ours has been consumed in order to replace us,” Merritt said in an interview in Raleigh, North Carolina.

Shulman contends technology “evolves very often faster than the law,” and says his company tries to be thoughtful about “not breaking the law” while still working to “deliver products that the world really wants.”

    When the music industry first confronted Suno over alleged copyright infringement, the company’s antagonistic response alienated professionals like Merritt.

    Symbolizing the divide was a clip last year in which Shulman was quoted as saying, “it’s not really enjoyable” to make music most of the time. Shulman started learning piano at age 4 but later dropped it. He took up bass guitar at 12, playing in rock bands in high school and college. He said that experience gave him some of the best moments of his life.

“You need to get really good at an instrument or really good at a piece of production software,” Shulman said on “The Twenty Minute VC” podcast. “I think the majority of people don’t enjoy the majority of the time they spend making music.”

    “Clearly, I wish I had said different words,” Shulman told the AP. The context, he added, was that “to produce perfect music takes a lot of repetitions and not all of those minutes are the most enjoyable bits of making music. On the whole, obviously, music is amazing. I play music every day for fun.”

    Sanchez, the Udio CEO, also would like people to know he loves making music. He’s an opera-loving tenor who’s sung in choirs and grew up crooning Luciano Pavarotti in his family’s home in Buffalo, New York.

Founded in 2023 by a group that included several AI researchers from Google, the startup now employs about 25 people. It has fewer users and has raised less capital than Suno, reducing its leverage in negotiations with record labels.

    But like ride-hailing company Lyft, which pitched itself as the friendly alternative to Uber’s aggressive expansion tactics more than a decade ago, Udio embraces its underdog status.

    “So many tech companies actively cultivate this I-am-a-tech-company-crusader and that’s part of their identity,” Sanchez said. “That alienates people who are creative and I am uniformly opposed to that.”

    Sanchez said he knows not every artist is going to embrace AI, but he hopes those who leave the room after talking with him realize he’s not imposing a kind of “AI bravado.”

    “If you took what we’re doing and pretended that the word AI wasn’t a part of it, people would be like, ‘Oh my gosh. This is so cool.’”

    In the basement office of his Philadelphia, Mississippi home, Christopher “Topher” Townsend is a one-man band, making and marketing Billboard-chart-topping gospel music — none of which he sings himself — and doing it in record time.

    The rapper, whose lyrics reflect his political conservatism, downloaded Suno in October and, within days, created Solomon Ray, a fictional singer that Townsend calls an extension of himself.

    Townsend uses ChatGPT to write lyrics, Suno to generate songs and other AI tools to create cover art and promotional videos under the Solomon Ray name.

“I can see why artists would be afraid,” Townsend said. “(Solomon Ray) has an immaculate voice. He doesn’t get sick. You know, he doesn’t have to take leave, he doesn’t get injured and he can work faster than I can work.”

    Trying to dispel that fear for aspiring artists is Jonathan Wyner, a professor of music production and engineering at the Berklee College of Music in Boston, who sees generative AI as just another tool.

    “To the creative musician, AI represents both enormous potential benefits in terms of streamlining things and frankly making kinds of music-making possible that weren’t possible before, and making it more accessible to people who want to make music,” he said.

    Such a vision remains a tough sell for artists who feel their work has already been exploited. Merritt says she’s particularly concerned about labels making deals with AI companies that leave out independent artists.

Neither Sanchez nor Shulman was invited to the Grammy Awards in February, but both spent time schmoozing on the sidelines of the event.

    “I think AI music is still officially not allowed, and my hope is that some of these rules change over the next year, and then maybe the 2027 Grammys, I’ll get an invite,” Shulman said.

___

    O’Brien reported from Cambridge, Massachusetts and New York. Ngowi reported from Cambridge and Somerville, Massachusetts. AP journalists Sophie Bates in Philadelphia, Mississippi and Allen G. Breed in Raleigh, North Carolina, contributed to this report.


  • New York sues ‘Counter-Strike’ game developer saying ‘loot boxes’ promote gambling


    NEW YORK — New York’s attorney general has sued video game developer Valve, claiming the “loot boxes” found in Counter-Strike and other popular video game franchises illegally promote gambling.

    State Attorney General Letitia James said in a lawsuit filed Wednesday in New York state court that games such as Counter-Strike 2, Team Fortress 2 and Dota 2 illegally charge users for the chance to win rare items held in the virtual containers.

    In Counter-Strike, the process even resembles a slot machine, with an animated spinning wheel that eventually rests on a selected item, James’ office said.

    “Valve has made billions of dollars by letting children and adults alike illegally gamble for the chance to win valuable virtual prizes,” James said in a statement. “These features are addictive, harmful, and illegal.”

    Messages seeking comment were left Wednesday for the Bellevue, Washington-based company.

    “Loot box” items are generally cosmetic, such as a hat for a player’s character or an artistic skin for weapons. They usually don’t serve any vital function in the games, but James’ office said the items can still be sold online for significant sums.

    Some of the rarest items can go for thousands of dollars online, according to James’ office. One item, an AK-47 Counter-Strike skin, recently sold for more than $1 million.

    James’ suit says Valve is violating New York’s constitution by promoting gambling in its games. It wants the company to stop the practice and pay restitution and damages to users, as well as a fine worth three times the amount of its profits from the features.

    The attorney general argues that research has found children introduced to gambling are four times more likely to develop a gambling problem later in life than those who are not.

    “Loot boxes, like other forms of gambling, can lead to addiction and result in real harm,” the suit reads. “But Valve’s loot boxes are particularly pernicious because they are popular among children and adolescents, who are lured into opening loot boxes by the prospect of winning expensive virtual items that convey status in the gaming world.”

James’ office said demand for “loot box” prizes has drawn interest not just from online speculators and investors who have helped values soar, but also thieves targeting third-party online marketplaces where the virtual items can be sold for cash.

    Valve facilitates those third-party marketplaces, as well as operating its own, the Steam Community Market, where players can sell their items and use the proceeds to buy other video games, gaming hardware or other virtual items.


  • Feds give record $27B in loans for utility expansion in Georgia and Alabama


    ATLANTA — Federal energy officials on Wednesday announced a record $27 billion loan to electric utilities in Georgia and Alabama, saying the loan will save customers money as the companies undertake a huge expansion driven by demand from computer data centers.

A total of $22.4 billion will go to Georgia Power and $4.1 billion to Alabama Power. Both are subsidiaries of Atlanta-based Southern Company, one of the nation’s largest utilities. The companies plan to use the cash to build new natural gas-fueled power plants and transmission lines, and to upgrade existing power plants.

    Energy Secretary Chris Wright said the loan will result in more than $7 billion in savings over decades from a lower, federally subsidized interest rate.

    “We’re focused on driving down costs,” Wright said. He added that the loan would help ensure Southern customers “have access to affordable, reliable and secure energy for decades to come.”

    Wright and President Donald Trump have frequently made the case for their fossil fuel-friendly policies — including orders over the past nine months to keep some coal-fired plants open past planned retirement dates — as necessary to ensure reliability of the nation’s electric grid.

    Wright says the orders have saved utility customers millions of dollars and helped keep lights on during last month’s winter storm. Critics say the orders are unnecessary and have raised electric bills as utilities keep older, more expensive plants operating.

    “These loans will help lower the cost of investments in our grid that will enhance reliability and resilience for the benefit of our customers,” said Chris Womack, Southern’s chairman, president and CEO.

    The new loan comes amid scrutiny on rising utility bills, with electricity prices increasing faster than inflation in many states. There is also widespread opposition to new data centers for artificial intelligence.

    Trump in his State of the Union Tuesday announced a “ratepayer protection pledge” against higher utility bills tied to AI. He said tech companies will provide their own power as they build data centers. Trump didn’t provide details but claimed prices will go down.

    It is unclear whether any tech companies have signed pledges to build their own power plants, but Wright said on a call with reporters Wednesday that “every name you know that’s developing a data center has been in dialogue with us.”

    He cited “cooperation” from giants such as Microsoft, Google and Meta, but he didn’t specify any written agreements.

    Federal officials have long given utility loans, including $12 billion in loans that the first Trump administration and President Barack Obama’s administration guaranteed for two costly nuclear reactors at Georgia’s Plant Vogtle, partially owned by Georgia Power.

    Trump’s tax and budget bill last year reshaped the loan program to focus on increasing capacity to generate and transmit electricity. Loan guarantees under President Joe Biden focused on green energy goals.

    Gregory Beard, who directs the newly renamed Office of Energy Dominance Financing, said Wednesday that cutting interest rates and discarding Biden’s policy “will get us back on the right track in terms of affordability.”

    The loan office will review individual projects to ensure they’re financially viable, he said. “We’re not going to build this plant or deploy this capital until we are sure that it’s the right thing to do for the local community, for the local ratepayer,” Beard said in an interview.

    Those requirements don’t seem to be laid out in loan agreements that Southern released Wednesday. Jennifer Whitfield, an attorney for the Southern Environmental Law Center who represented Georgia Power expansion opponents, said the loans will save money for Georgians, but questioned their wisdom.

    “As a taxpayer, it’s hard to avoid the fact that this is a bailout paid for by every taxpaying citizen of the United States,” she said.

    Any savings for customers must be approved by the elected Public Service Commissions in Alabama and Georgia. Commissioners last July approved a three-year rate freeze requested by Georgia Power, while commissioners in Alabama approved a two-year rate freeze in December. Company officials tout the freezes when utilities nationwide have been seeking record increases. But opponents complain company-friendly regulators locked in high prices and high utility profits.

    Voters booted two Republican incumbents off the Georgia commission in November amid complaints about rising bills.

    Commissioner Peter Hubbard, one of two new Democrats, unsuccessfully tried to roll back approval for Georgia Power’s expansion in recent weeks. He said Wednesday that the declining costs of solar, wind and battery power could make new natural gas plants uneconomic over time.

“It’s locking us into a costlier option,” he said of the federal loan. “And so I think it just is not meeting the moment of affordability.”

    ___

    Daly reported from Washington.


  • NASA moves its Artemis II moon rocket off launch pad for more repairs


    CAPE CANAVERAL, Fla. — NASA moved its grounded Artemis moon rocket from the launch pad back to its hangar Wednesday for more repairs.

    The slow-motion trek at Florida’s Kennedy Space Center was expected to take all day. The 322-foot (98-meter) Space Launch System rocket had spent a month at the pad ready for potential liftoff, but encountered a series of problems serious enough to require a return to the Vehicle Assembly Building, about 4 miles (6.4 kilometers) away.

Managers ordered the rollback over the weekend after the rocket’s helium pressurization system malfunctioned. The launch, already delayed a month by hydrogen fuel leaks, had been targeted for March for astronauts’ first trip to the moon in decades. But now the Artemis II lunar fly-around by a U.S.-Canadian crew is off until at least April.

    All four astronauts were at the U.S. Capitol on Tuesday night for President Donald Trump’s State of the Union address as invited guests, since the flight delay means they no longer need to quarantine.

    ___

    The Associated Press Health and Science Department receives support from the Howard Hughes Medical Institute’s Department of Science Education and the Robert Wood Johnson Foundation. The AP is solely responsible for all content.


  • Rechat now integrated with Canva – Houston Agent Magazine


Rechat now provides listing data to Canva, giving users access to high-resolution listing photos, agent information and property descriptions directly in Canva’s design platform.

    Additionally, Canva designs can now be added directly to Rechat with one click, Rechat said in a press release, further consolidating agent workflow.

    “Real estate professionals need to move at the speed of the market without sacrificing quality,” said Chris Hadges, head of Canva for Real Estate. “By integrating with Rechat, we are empowering agents to turn live property data into polished, on-brand marketing materials in minutes.”

    Canva users can integrate Rechat by adding the Rechat app to their account, logging in and allowing listing access.

    “This is about removing friction from the creative process and meeting agents where they want to work,” said Rechat Founder and CEO Shayan Hamidi. “By allowing our users to send Rechat’s listing data directly into Canva, we’re giving them the ability to create high quality marketing assets instantly.”

    Rechat, a unified operating system for real estate agents, has existing partnerships with tech companies including SkySlope and Follow Up Boss.


    Emily Marek


  • Crew-11 astronaut with mission-ending medical issue identifies self


    CAPE CANAVERAL SPACE FORCE STATION — In a prepared statement, NASA astronaut Mike Fincke revealed that it was he who suffered a medical issue onboard the International Space Station that resulted in the Crew-11 mission being cut short.


    What You Need To Know

• NASA astronaut Mike Fincke thanked his fellow astronauts and NASA’s medical team after he suffered a medical issue onboard the International Space Station
• It is not known what type of medical issue he suffered while onboard
• Scroll down to read his full statement

The 58-year-old retired U.S. Air Force colonel recapped the episode and thanked the fellow astronauts and NASA flight surgeons who responded when he experienced his medical issue, though he did not reveal what it was.

    “On Jan. 7, while aboard the International Space Station, I experienced a medical event that required immediate attention from my incredible crewmates. Thanks to their quick response and the guidance of our NASA flight surgeons, my status quickly stabilized,” he wrote.  

Fincke, who was the pilot of Crew-11, and Cmdr. Zena Cardman had been scheduled to conduct a six-hour spacewalk the following day, during which the pair were going to install a modification kit and cables for a future rollout of a solar array.

    That did not happen.

The Crew-11 mission was cut short, and the crew splashed down back on Earth this past January, a month earlier than the mission was supposed to end.

    During a press conference, NASA Administrator Jared Isaacman only revealed that an unnamed astronaut suffered a “serious medical condition” while onboard the space station.

    Even during a separate press conference with the Crew-11 members, no one revealed the identity of the astronaut or what the medical episode was.

    Fincke was selected to be a NASA astronaut in 1996. The Pennsylvania native is a veteran astronaut, logging 549 days in space with nine spacewalks.

    In his words

    “On Jan. 7, while aboard the International Space Station, I experienced a medical event that required immediate attention from my incredible crewmates. Thanks to their quick response and the guidance of our NASA flight surgeons, my status quickly stabilized.

    After further evaluation, NASA determined the safest course was an early return for Crew-11—not an emergency, but a carefully coordinated plan to be able to take advantage of advanced medical imaging not available on the space station. On Jan. 15, we splashed down off the coast of San Diego after an amazing five-and-a-half-month mission.

    I am deeply grateful to my fellow Expedition 74 members—Zena Cardman, Kimiya Yui, Oleg Platonov, Chris Williams, Sergey Kud-Sverchkov, and Sergei Mikayev—as well as the entire NASA team, SpaceX, and the medical professionals at Scripps Memorial Hospital La Jolla near San Diego. Their professionalism and dedication ensured a positive outcome.

    I’m doing very well and continuing standard post-flight reconditioning at NASA’s Johnson Space Center in Houston. Spaceflight is an incredible privilege, and sometimes it reminds us just how human we are. Thank you all for your support.”


    Anthony Leone


  • Hegseth demands full military access to Anthropic’s AI model Claude and sets deadline for end of week


    Trust is breaking down between the Pentagon and Anthropic over the use of its AI model, sources familiar with the situation told CBS News. 

    In a meeting at the Pentagon on Tuesday morning, Defense Secretary Pete Hegseth gave Anthropic’s CEO Dario Amodei until the end of this week to give the military a signed document that would grant full access to its artificial intelligence model, the sources said. 

    Officials are considering invoking the Defense Production Act to make Anthropic adhere to what the military is seeking, they said. Axios reported earlier on some of what transpired in the meeting.

    Defense officials want full control of Anthropic’s AI technology for use in its military operations, sources told CBS News. The company was awarded a $200 million contract by the Pentagon in July to develop AI capabilities that would advance U.S. national security.

    Anthropic has repeatedly asked the Defense Department to agree to guardrails that would restrict the AI model, called Claude, from conducting mass surveillance of Americans, sources said. Defense officials noted that that’s illegal and said the military is simply asking for a license to use the AI strictly for lawful activities.


Amodei also wants to ensure Claude is not used by the Pentagon for final targeting decisions in military operations without any human involvement, one source familiar with the meeting said. Claude is not immune from hallucinations and, without human judgment, is not reliable enough to avoid potentially lethal mistakes like unintended escalation or mission failure, the person said.

    But when asked for comment, a senior Pentagon official said: “This has nothing to do with mass surveillance and autonomous weapons being used. The Pentagon has only given out lawful orders.”

    The official said Grok, which is owned by Elon Musk’s xAI, is on board with being used in a classified setting, and other AI companies are close.

    In Tuesday’s meeting, Hegseth told Amodei that when the government purchases Boeing planes, the aerospace company has no say in how the Pentagon uses the planes. He argued the same should be true for the military’s use of Claude.

    After Amodei left, officials discussed whether to use the Defense Production Act in this situation, which enables the government to exert control over domestic industries. 

But because officials say they aren’t sure the government can trust Anthropic at this point, the Pentagon may decide to officially designate the company as a “supply chain risk” to push it out of government work, two sources said. Anthropic was the first tech company authorized to work on the military’s classified networks.

    An Anthropic spokesperson said in a statement, “We continued good-faith conversations about our usage policy to ensure Anthropic can continue to support the government’s national security mission in line with what our models can reliably and responsibly do.”

    Hegseth gave Anthropic a deadline of 5 p.m. Friday.


  • Waymo robotaxis being dispatched in 10 U.S. markets with expansion in Texas, Florida


    Waymo will begin dispatching its robotaxis in four more cities in Texas and Florida, expanding the territory covered by its fleet of self-driving cars to 10 major U.S. metropolitan markets.

    The move into Dallas, Houston, San Antonio and Orlando, Florida, announced Tuesday, widens Waymo’s early lead in autonomous driving while rival services from Tesla and the Amazon-owned Zoox are still testing their vehicles in only a few U.S. cities.

    In contrast, Waymo’s robotaxis already provide more than 400,000 weekly trips in the six metropolitan areas where they have been transporting passengers: Phoenix, the San Francisco Bay Area, Los Angeles, Miami, Atlanta, and Austin, Texas.

    Waymo operates its ride-hailing service through its own app in all the U.S. cities except Atlanta and Austin, where its robotaxis can only be summoned through Uber’s ride-hailing service.

    The expansion into four more markets marks a significant step toward Waymo’s goal to surpass 1 million weekly paid trips by the end of 2026. Without identifying where its robotaxis will be available next, Waymo is targeting a list of eight other cities that include Las Vegas, Washington, Detroit and Boston while signaling its first overseas availability is likely to be London.

To help pay for more robotaxis, Waymo recently raised $16 billion in a financial infusion that values the company at $126 billion. The valuation fueled speculation that Waymo may eventually be spun off from its corporate parent Alphabet, where it began as a secret project within Google in 2009.

Although Waymo is opening up in four more cities, its robotaxis initially will be available only to a limited number of people using its ride-hailing app in Dallas, Houston, San Antonio and Orlando before the service opens to all comers in those markets.


  • Teens are using AI frequently in their daily lives, and many parents aren’t aware, survey finds


    Parents are often caught off guard by what their teens are doing in daily life — and when it comes to AI, the “perception gap” might be larger than they thought, according to a Pew Research Center survey released Tuesday.

    The survey found a significant gap exists between parents’ perceptions and their teens’ actual use of AI chatbots. About 64% of U.S. teens reported using AI chatbots, while 51% of parents said their teens use them. 

    “Technology is not just a teen issue or a parent issue — it’s a family issue,” said Pew senior researcher Colleen McClain. She said researchers surveyed both teens and parents and heard different perspectives on managing AI usage. 

    Just over half (54%) of the teens surveyed said they’ve used AI chatbots for help with schoolwork, while about 1 in 10 said they’ve gotten emotional support from an AI chatbot.

Teens, often at the forefront as users of new technology, told researchers they see AI as a tool in their daily lives, and they were more positive than negative in their views about how AI will impact them personally.


    Parents have a “lot to juggle,” McClain said, and many are concerned about their children’s use of AI chatbots — especially after several high-profile cases in which teens died by suicide after prolonged interactions with the new technology. 

    “It’s complicated, it’s nuanced, it’s not a one-size-fits-all,” McClain said. 

    She said the survey, the most in-depth yet on teens and AI, found that many parents don’t speak to their teens about their AI use; just 4 in 10 parents said they do. Many don’t make managing screen time their top priority amid other life demands, and some parents said they feel judged for how they manage it.

    Dr. Amber W. Childs, an associate professor of psychiatry at the Yale School of Medicine, told CBS News the question shouldn’t be if teens are using AI but how they are using the technology. 

    She said most teens are using the technology for mundane daily tasks, but parents need to know if “they’re using it in the absence of other sources of connection or coping skills and support.” Around 12% of teens said they’ve gotten emotional support through chatbots, and Childs said teens relying on the technology as their sole source of emotional support is concerning.

    Psychologist Joshua Goodman, an associate professor at Southern Oregon University, said teens who don’t feel comfortable talking to parents or others about their sexuality or orientation might feel more comfortable speaking to AI about their sexual health. These teens are “not reaching out for support” from adults in their lives, but it’s not necessarily a bad thing, Goodman said. 

    He said parents should watch for warning signs, such as teens using AI constantly, the technology replacing their critical thinking, or signs of depression.

    “You want to get curious,” Childs said, “but you also want to be communicating to connect.” She cautioned parents not to just pass down information and warnings to their teens, but to use the conversation to understand how AI is being used in their lives. Parents can then set boundaries around the technology that align with family expectations, she said.

    She said most teens are probably using AI to improve their life skills, like learning new languages or doing schoolwork.

    About a quarter of teens surveyed said chatbots have been extremely or very helpful for completing their schoolwork, while another 25% said they’ve been somewhat helpful. Most said they use the technology for research or help with math problems.

    About 1 in 10 teens said they do all or most of their schoolwork with chatbots’ help. 

    More than half of teens said they’ve used chatbots to search for information, and almost half said they’ve done so for fun or entertainment.

    Some, however, are wary about the way the technology will affect their lives. One teenage boy told Pew, “It’s already being used to spread propaganda, there’s no end to what it can do, it’s hard to tell what’s real or AI online anymore.”

    Pew surveyed 1,458 U.S. teens and their parents from Sept. 25 to Oct. 9, 2025.


  • Reddit hit with $20 million UK data privacy fine over child safety failings


    LONDON — Britain’s data privacy watchdog slapped online forum Reddit on Tuesday with a fine worth nearly $20 million for failures involving children’s personal information.

    The Information Commissioner’s Office said it issued the penalty worth 14.5 million pounds ($19.5 million) because the failures resulted in the platform using children’s data “unlawfully.”

    “Children under 13 had their personal information collected and used in ways they could not understand, consent to or control. That left them potentially exposed to content they should not have seen,” said Information Commissioner John Edwards. “This is unacceptable and has resulted in today’s fine.”

    The U.K. privacy regulator has been escalating scrutiny of online platforms over child safety. Earlier this month, it hit MediaLab, owner of the image-sharing site Imgur, with a 247,590-pound fine over similar failures, and it has been investigating TikTok since last year.

    The watchdog took issue with Reddit’s age verification measures. It said that even though the platform doesn’t allow children under 13 to use its service, it didn’t have any way to check the ages of its users before July 2025.

    Edwards said online platforms that are likely to be accessed by children are responsible for protecting them by making sure they’re not exposed to any risks “through the way their data is used.” They can do this with “effective age assurance measures,” he said.

    Reddit rolled out age verification measures in July 2025 to gate access to mature content, including asking users to declare their age when setting up an account.

    But the watchdog said “self-declaration” is easy to bypass and that it told Reddit it would continue to monitor the platform’s handling of children’s data.

    Reddit said it would appeal the decision.

    “Reddit doesn’t require users to share information about their identities, regardless of age, because we are deeply committed to their privacy and safety,” the company said in a statement. “The ICO’s insistence that we collect more private information on every UK user is counterintuitive and at odds with our strong belief in our users’ online privacy and safety.”


  • NASA to roll back its Artemis II moon rocket for repairs


    KENNEDY SPACE CENTER — On Wednesday morning, NASA will begin its Artemis II moon rocket’s long march back to the Vehicle Assembly Building for repairs after a helium flow issue forced the cancellation of another launch attempt.


    What You Need To Know

    • NASA will use a crawler transporter to move the Space Launch System rocket and its Orion capsule back to the Vehicle Assembly Building
    • The roughly 4-mile move from the launch pad to the Vehicle Assembly Building can take up to 12 hours
    • The rollback is needed because a helium flow issue was discovered

    On Wednesday at around 9 a.m. ET, NASA will use a crawler transporter to move the 322-foot (98-meter) Space Launch System rocket and its Orion capsule back to the Vehicle Assembly Building, the U.S. space agency stated.

    The approximately 4-mile journey to the Vehicle Assembly Building can take up to 12 hours, as the crawler transporter moves at about 1 mile per hour or less.

    The trek is necessary because of a helium flow issue that came up over the weekend.

    “Once back in the VAB, teams will immediately begin work to install platforms to access the area of the helium flow issue. Teams also will take advantage of the time in the VAB to replace batteries in the flight termination system and retest it, and replace additional batteries in the upper stage,” NASA shared.

    The Artemis II moon rocket will be rolled back to NASA’s Vehicle Assembly Building for repair work. (Spectrum News file photo/Anthony Leone)

    The Artemis II rocket has had some issues since it was first rolled to its temporary home at Launch Pad 39B at NASA’s Kennedy Space Center in January.

    During the first wet dress rehearsal, a prelaunch test, NASA teams loaded more than 700,000 gallons of cryogenic propellant into the rocket, but they discovered a liquid hydrogen leak, among other issues.

    The teams replaced the seals where the leak was discovered, near the rocket’s tail service mast umbilical interface.

    In the second wet dress rehearsal, the new seals held and the test went smoothly.

    However, over the weekend, NASA Administrator Jared Isaacman announced on X that a helium flow issue was discovered, and the massive moon rocket would need to be rolled back to the Vehicle Assembly Building for repairs.

    Hurricane Ian forced NASA to roll the rocket back into the Vehicle Assembly Building during the Artemis I mission in 2022.

    Artemis II was originally scheduled to launch in February, until the leak pushed that back, and the next attempt was planned for March.

    Now, the next possible launch attempt will be in April.

    When all is ready, NASA commander Reid Wiseman, pilot Victor Glover, mission specialist Christina Koch and Canadian Space Agency mission specialist Jeremy Hansen will launch on a flyby mission around the moon.


    Anthony Leone

  • SpaceX launches Starlink satellites into clear evening skies


    CAPE CANAVERAL SPACE FORCE STATION — The weather was mighty fine for a Tuesday evening Starlink launch. 


    What You Need To Know

    • A Falcon 9 rocket sent up the Starlink 6-110 mission from Space Launch Complex 40

    A Falcon 9 rocket sent up the Starlink 6-110 mission from Space Launch Complex 40 at Cape Canaveral Space Force Station, SpaceX stated.

    The launch window opened at 3:56 p.m. ET and was set to close at 7:56 p.m. ET, giving SpaceX four hours to send up the Starlink satellites.

    The liftoff time was 6:04 p.m. ET.

    The 45th Weather Squadron gave a 95% chance of good liftoff conditions, with no forecast restrictions against the launch.


    Double Digits

    This is the 10th mission for the Falcon 9’s first-stage booster B1092.

    Its previous missions include:

    1. Starlink 12-13 mission
    2. NROL-69 mission
    3. Bandwagon-3 mission
    4. GPS III-7 mission
    5. Starlink 10-34 mission
    6. USSF-36 mission
    7. Starlink 10-61 mission
    8. Starlink 6-89 mission
    9. Starlink 6-82 mission

    After stage separation, the first-stage booster landed on the droneship Just Read the Instructions, stationed in the Atlantic Ocean.

    About the mission

    The 29 satellites will be heading to low-Earth orbit to join the thousands already there.

    Once deployed and in their orbit, they will provide internet service to many parts of Earth.

    SpaceX owns and operates Starlink.

    Dr. Jonathan McDowell, of the Harvard-Smithsonian Center for Astrophysics, has been tracking Starlink satellites.

    Before this launch, McDowell recorded the following:

    • 9,779 are in orbit
    • 8,436 are in operational orbit

    Anthony Leone

  • Hegseth and Anthropic CEO set to meet as debate intensifies over the military’s use of AI


    WASHINGTON — Defense Secretary Pete Hegseth plans to meet Tuesday with the CEO of Anthropic, the only artificial intelligence company among its peers that has not supplied its technology to a new U.S. military internal network.

    Anthropic, maker of the chatbot Claude, declined to comment on the meeting but CEO Dario Amodei has made clear his ethical concerns about unchecked government use of AI, including the dangers of fully autonomous armed drones and of AI-assisted mass surveillance that could track dissent.

    The meeting between Hegseth and Amodei was confirmed by a defense official who was not authorized to comment publicly and spoke on condition of anonymity.

    It underscores the debate over AI’s role in national security and concerns about how the technology could be used in high-stakes situations involving lethal force, sensitive information or government surveillance. It also comes as Hegseth has vowed to root out what he calls a “woke culture” in the armed forces.

    “A powerful AI looking across billions of conversations from millions of people could gauge public sentiment, detect pockets of disloyalty forming, and stamp them out before they grow,” Amodei wrote in an essay last month.

    The Pentagon announced last summer that it was awarding defense contracts to four AI companies — Anthropic, Google, OpenAI and Elon Musk’s xAI. Each contract is worth up to $200 million.

    Anthropic was the first AI company to get approved for classified military networks, where it works with partners like Palantir. The other three companies, for now, are only operating in unclassified environments.

    By early this year, Hegseth was highlighting only two of them: xAI and Google.

    The defense secretary said in a January speech at Musk’s space flight company, SpaceX, in South Texas that he was shrugging off any AI models “that won’t allow you to fight wars.”

    Hegseth said his vision for military AI systems means that they operate “without ideological constraints that limit lawful military applications,” before adding that the Pentagon’s “AI will not be woke.”

    In January, Hegseth said Musk’s artificial intelligence chatbot Grok would join the Pentagon network, called GenAI.mil. The announcement came days after Grok — which is embedded into X, the social media network owned by Musk — drew global scrutiny for generating highly sexualized deepfake images of people without their consent.

    OpenAI announced in early February that it, too, would join the military’s secure AI platform, enabling service members to use a custom version of ChatGPT for unclassified tasks.

    Anthropic has long pitched itself as the more responsible and safety-minded of the leading AI companies, ever since its founders quit OpenAI to form the startup in 2021.

    The uncertainty with the Pentagon is putting those intentions to the test, according to Owen Daniels, associate director of analysis and fellow at Georgetown University’s Center for Security and Emerging Technology.

    “Anthropic’s peers, including Meta, Google and xAI, have been willing to comply with the department’s policy on using models for all lawful applications,” Daniels said. “So the company’s bargaining power here is limited, and it risks losing influence in the department’s push to adopt AI.”

    In the AI craze that followed the release of ChatGPT, Anthropic closely aligned with President Joe Biden’s administration in volunteering to subject its AI systems to third-party scrutiny to guard against national security risks.

    Amodei, the CEO, has warned of AI’s potentially catastrophic dangers while rejecting the label that he’s an AI “doomer.” He argued in the January essay that “we are considerably closer to real danger in 2026 than we were in 2023” but that those risks should be managed in a “realistic, pragmatic manner.”

    This would not be the first time Anthropic’s advocacy for stricter AI safeguards has put it at odds with the Trump administration. Anthropic needled chipmaker Nvidia publicly, criticizing Trump’s proposals to loosen export controls to enable some AI computer chips to be sold in China. The AI company, however, remains a close partner with Nvidia.

    The Trump administration and Anthropic also have been on opposite sides of a lobbying push to regulate AI in U.S. states.

    Trump’s top AI adviser, David Sacks, accused Anthropic in October of “running a sophisticated regulatory capture strategy based on fear-mongering.”

    Sacks made the remarks on X in response to an Anthropic co-founder, Jack Clark, writing about his attempt to balance technological optimism with “appropriate fear” about the steady march toward more capable AI systems.

    Anthropic hired a number of ex-Biden officials soon after Trump’s return to the White House, but it’s also tried to signal a bipartisan approach. The company recently added Chris Liddell, a former White House official from Trump’s first term, to its board of directors.

    The Pentagon-Anthropic debate is reminiscent of an uproar several years ago when some tech workers objected to their companies’ participation in Project Maven, a Pentagon drone surveillance program. While some workers quit over the project and Google itself dropped out, the Pentagon’s reliance on drone surveillance has only increased.

    Similarly, “the use of AI in military contexts is already a reality and it is not going away,” Daniels said.

    “Some contexts are lower stakes, including for back-office work, but battlefield deployments of AI entail different, higher-stakes risks,” he said, referring to the use of lethal force or weapons like nuclear arms. “Military users are aware of these risks and have been thinking about mitigation for almost a decade.”

    ___

    O’Brien reported from Providence, Rhode Island.

