Delve into how ChatGPT can save over $100,000, and find out what to do about copyright issues of AI-generated content.
Ben Angel

The list of Wikipedia‘s top 25 most-viewed pages in 2023 is out — charting the curiosity of the internet and serving as a barometer of the world’s shared interests and concerns.
OpenAI‘s ChatGPT took the spotlight with an impressive 49.4 million page views (out of more than 84 billion total views), according to the nonprofit Wikimedia Foundation.
The wildly popular AI-driven chatbot set a new record for user base growth this year, attracting a whopping 100 million active users in January alone. The triumph of ChatGPT underlines a broader tech revolution, with companies investing billions into AI development and chip-making to power these future innovations.
Earlier this year, OpenAI CEO Sam Altman told ABC News that AI might be “the greatest technology humanity has yet developed,” but that it also comes with real dangers. “We’ve got to be careful here,” he said. “I think people should be happy that we are a little bit scared of this.”
Wikipedia’s list also reflected the substantial cultural and digital impact of Indian audiences, CNN reported. Among the top five were entries related to the 2023 Cricket World Cup and the Indian Premier League, resonating with the sport’s massive fan base in India. Indian cinema made an impact too, with Bollywood action movies Jawan and Pathaan outperforming American blockbusters such as Barbie and Avatar: The Way of Water in Wikipedia page views.
Not to be overshadowed, subjects on influential personalities continued to captivate readers, from Taylor Swift‘s latest music conquests to Elon Musk‘s headline-making maneuvers, revealing the evergreen interest in celebrities and icons. The list was peppered with sports icons, acclaimed movies and pivotal global events, with the Russian invasion of Ukraine and controversial figure Andrew Tate rounding out the entries.
Here’s the list of 25, ranked by number of page views:
1. ChatGPT, 49,490,406 page views
2. Deaths in 2023, 42,666,860
3. 2023 Cricket World Cup, 38,171,653
4. Indian Premier League, 32,012,810
5. Oppenheimer (film), 28,348,248
6. Cricket World Cup, 25,961,417
7. J. Robert Oppenheimer, 25,672,469
8. Jawan (film), 21,791,126
9. 2023 Indian Premier League, 20,694,974
10. Pathaan (film), 19,932,509
11. The Last of Us (TV series), 19,791,789
12. Taylor Swift, 19,418,385
13. Barbie (film), 18,051,077
14. Cristiano Ronaldo, 17,492,537
15. Lionel Messi, 16,623,630
16. Premier League, 16,604,669
17. Matthew Perry, 16,454,666
18. United States, 16,240,461
19. Elon Musk, 14,370,395
20. Avatar: The Way of Water, 14,303,116
21. India, 13,850,178
22. Lisa Marie Presley, 13,764,007
23. Guardians of the Galaxy Vol. 3, 13,392,917
24. Russian invasion of Ukraine, 12,798,866
25. Andrew Tate, 12,728,616
Amanda Breen

Elon Musk played a big role in persuading Ilya Sutskever to join OpenAI as chief scientist in 2015. Now the Tesla CEO wants to know what he saw there that scared him so much.
Sutskever, whom Musk recently described as a “good human” with a “good heart”—and the “linchpin for OpenAI being successful”—served on the OpenAI board that fired CEO Sam Altman two Fridays ago; indeed, Sutskever informed Altman of his dismissal. Since then, however, the board has been revamped and Altman reinstated, with investors led by Microsoft pushing for the changes.
Sutskever himself backtracked on Monday, writing on X, “I deeply regret my participation in the board’s actions. I never intended to harm OpenAI.”
But Musk and other tech elites—including ones who mocked the board for firing Altman—are still curious about what Sutskever saw.
Late on Thursday, venture capitalist Marc Andreessen, who has ridiculed “doomers” who fear AI’s threat to humanity, posted to X, “Seriously though — what did Ilya see?” Musk replied a few hours later, “Yeah! Something scared Ilya enough to want to fire Sam. What was it?”
That remains a mystery. The board gave only vague reasons for firing Altman. Not much has been revealed since.
OpenAI’s mission is to develop artificial general intelligence (AGI) and ensure it “benefits all of humanity.” AGI refers to a system that can match humans when faced with an unfamiliar task.
OpenAI’s unusual corporate structure put a nonprofit board higher than the capped-profit company, allowing the board to fire the CEO if, for instance, it felt the commercialization of potentially dangerous AI capabilities was moving at an unsafe speed.
Early on Thursday, Reuters reported that several OpenAI researchers had warned the board in a letter of a new AI that could threaten humanity. OpenAI, after being contacted by Reuters, then wrote an internal email acknowledging a project called Q* (pronounced Q-Star), which some staffers felt might be a breakthrough in the company’s AGI quest. Q* reportedly can ace basic mathematical tests, suggesting an ability to reason, as opposed to ChatGPT’s more predictive behavior.
Musk has long warned of the potential dangers to humanity from artificial intelligence, though he also sees its upsides and now offers a ChatGPT rival called Grok through his startup xAI. He cofounded OpenAI in 2015 and helped lure key talent including Sutskever, but he left a few years later on a sour note. He later complained that the onetime nonprofit — which he had hoped would serve as a counterweight to Google’s AI dominance — had instead become a “closed source, maximum-profit company effectively controlled by Microsoft.”
Last weekend, he weighed in on the OpenAI board’s decision to fire Altman, writing: “Given the risk and power of advanced AI, the public should be informed of why the board felt they had to take such drastic action.”
When an X user suggested there might be a “bombshell variable” unknown to the public, Musk replied, “Exactly.”
Sutskever, after his backtracking on Monday, responded to the return of Altman by writing on Wednesday, “There exists no sentence in any language that conveys how happy I am.”
Steve Mollman

Gamers are a passionate bunch, and we’re no exception. These are the week’s most interesting perspectives on the wild, wonderful, and sometimes weird world of video game news.
Scott Pilgrim Takes Off, the new animated series based on Bryan Lee O’Malley’s graphic novels, is out on Netflix. The eight-episode series reunites the voice cast of the 2010 live-action movie Scott Pilgrim vs. the World and is a hilarious blend of the series’ quick wit and well-measured pop culture references. All of this sounds like a recipe for success, right? Well, it’s a little more complicated. Read More

Ubisoft’s new The Division game isn’t even out yet, as it’s still in beta testing and won’t launch officially until 2024. But after trying the beta, I already want one feature from the upcoming game to become standard in every video game I play in the future. Read More

OpenAI is the research organization behind ChatGPT, the AI-generated chatbot that took the internet by storm last year for its capacity to have really weird conversations with tech journalists. It’s at the center of Microsoft’s big bet on generative AI tools transforming the world, gaming, and more, and it’s now at risk of imploding after its CEO, Sam Altman, was mysteriously ousted by the OpenAI board of directors and Twitch co-founder Emmett Shear was desperately recruited to replace him. Here’s all you really need to know about OpenAI to appreciate what a clusterfuck the last few days have been. Read More

How much time has to pass before it becomes acceptable to remaster or even remake a game? 10 years? 15 years? What about three-ish years? Is that enough time between the original and the remaster? Well, that’s what’s happening early next year as Naughty Dog is remastering 2020’s The Last of Us Part II. Read More

Another day, another big video game crossover. This time it’s Bungie’s online looter shooter, Destiny 2, adding Witcher 3-inspired armor to its digital store. Are you excited? I’m not. In reality, I’m just really tired of every brand mixing together, regardless of whether it makes sense or is needed, as if concocting the world’s worst stew. Read More

Whenever a new blockbuster first-person shooter drops, gamers limber up so they can once again argue over how multiplayer matches get made and the algorithmic systems that determine who plays against whom and when. The recent release of Call of Duty: Modern Warfare III is no exception—not long after its multiplayer servers booted on November 10, players began flocking to Reddit, X (Twitter), and everywhere in between to complain about the quality (or perceived lack thereof) of Activision’s matchmaking. But, as with so many issues in the gaming industry, there’s a serious lack of nuance and true understanding at play here. Read More

Remember when it took us seven years to get a new The Last of Us game? Remember when there was even a question about whether or not we’d ever get a sequel to Naughty Dog’s post-apocalyptic action game because the ending was so intentionally ambiguous and thought-provoking?
Now, it seems we can’t go a year without being reminded that Sony thinks as many people should experience this series as possible, while folks associated with the HBO adaptation praise the game in ways that border on the absurd. Now, we’re getting a remaster of The Last of Us Part II, and it feels like we’re reaching peak Last of Us fatigue. Read More
Open Combat Missions are a fresh idea worth carrying over to future Call of Duty games.
Kotaku Staff

Debating AI might seem like a pointless venture – but you have a good chance of being told you’re right, even when you’re not.
Artificial intelligence, specifically large language models like ChatGPT, has shown remarkable capabilities in tackling complex questions. However, a study by The Ohio State University reveals an intriguing vulnerability: ChatGPT can be easily convinced that its correct answers are wrong. This discovery sheds light on the AI’s reasoning mechanisms and highlights potential limitations.
Researchers conducted an array of debate-like conversations with ChatGPT, challenging the AI on its correct answers. The results were startling. Despite providing correct solutions initially, ChatGPT often conceded to invalid arguments posed by users, sometimes even apologizing for its supposedly incorrect answers. This phenomenon raises critical questions about the AI’s understanding of truth and its reasoning process.
AI’s prowess in complex reasoning tasks is well-documented. Yet, this study exposes a potential flaw: the inability to defend correct beliefs against trivial challenges. Boshi Wang, the study’s lead author, notes this contradiction. Despite AI’s efficiency in identifying patterns and rules, it struggles with simple critiques, similar to someone who copies information without fully comprehending it.
The study’s findings imply significant concerns. For example, an AI system’s failure to uphold correct information in the face of opposition could lead to misinformation or wrong decisions, especially in critical fields like healthcare and criminal justice. The researchers aim to assess the safety of AI systems for human interaction, given their growing integration into various sectors.
Determining why ChatGPT fails to defend its correct answers is challenging due to the “black-box” nature of LLMs. The study suggests two possible causes: the base model’s lack of reasoning and truth understanding, and the influence of human feedback, which may teach the AI to yield to human opinion rather than stick to factual correctness.
Despite identifying this issue, solutions are not immediately apparent. Developing methods to enhance AI’s ability to maintain truth in the face of opposition will be crucial for its safe and effective application. The study marks an important step in understanding and improving the reliability of AI systems.
Source: “ChatGPT often won’t defend its answers — even when it is right” — ScienceDaily
WTF

The OpenAI power struggle that captivated the tech world after co-founder Sam Altman was fired has finally reached its end — at least for the time being. But what to make of it?
It feels almost as though some eulogizing is called for — like OpenAI died and a new, but not necessarily improved, startup stands in its place. Ex-Y Combinator president Altman is back at the helm, but is his return justified? OpenAI’s new board of directors is getting off to a less diverse start (i.e. it’s entirely white and male), and the company’s founding philanthropic aims are in jeopardy of being co-opted by more capitalist interests.
That’s not to suggest that the old OpenAI was perfect by any stretch.
As of Friday morning, OpenAI had a six-person board — Altman, OpenAI chief scientist Ilya Sutskever, OpenAI president Greg Brockman, tech entrepreneur Tasha McCauley, Quora CEO Adam D’Angelo and Helen Toner, director at Georgetown’s Center for Security and Emerging Technologies. The board was technically tied to a nonprofit that had a majority stake in OpenAI’s for-profit side, with absolute decision-making power over the for-profit OpenAI’s activities, investments and overall direction.
OpenAI’s unusual structure was established by the company’s co-founders, including Altman, with the best of intentions. The nonprofit’s exceptionally brief (500-word) charter outlines that the board make decisions ensuring “that artificial general intelligence benefits all humanity,” leaving it to the board’s members to decide how best to interpret that. Neither “profit” nor “revenue” get a mention in this North Star document; Toner reportedly once told Altman’s executive team that triggering OpenAI’s collapse “would actually be consistent with the [nonprofit’s] mission.”
Maybe the arrangement would have worked in some parallel universe; for years, it appeared to work well enough at OpenAI. But once investors and powerful partners got involved, things became… trickier.
After the board abruptly canned Altman on Friday without notifying just about anyone, including the bulk of OpenAI’s 770-person workforce, the startup’s backers began voicing their discontent in both private and public.
Satya Nadella, the CEO of Microsoft, a major OpenAI collaborator, was allegedly “furious” to learn of Altman’s departure. Vinod Khosla, the founder of Khosla Ventures, another OpenAI backer, said on X (formerly Twitter) that the fund wanted Altman back. Meanwhile, Thrive Capital, the aforementioned Khosla Ventures, Tiger Global Management and Sequoia Capital were said to be contemplating legal action against the board if negotiations over the weekend to reinstate Altman didn’t go their way.
Now, from outside appearances, OpenAI employees weren’t unaligned with these investors. On the contrary, close to all of them — including Sutskever, in an apparent change of heart — signed a letter threatening the board with mass resignation if they opted not to reverse course. But one must consider that these OpenAI employees had a lot to lose should OpenAI crumble — job offers from Microsoft and Salesforce aside.
OpenAI had been in discussions, led by Thrive, to possibly sell employee shares in a move that would have boosted the company’s valuation from $29 billion to somewhere between $80 billion and $90 billion. Altman’s sudden exit — and OpenAI’s rotating cast of questionable interim CEOs — gave Thrive cold feet, putting the sale in jeopardy.
But now after several breathless, hair-pulling days, some form of resolution’s been reached. Altman — along with Brockman, who resigned on Friday in protest over the board’s decision — is back, albeit subject to a background investigation into the concerns that precipitated his removal. OpenAI has a new transitionary board, satisfying one of Altman’s demands. And OpenAI will reportedly retain its structure, with investors’ profits capped and the board free to make decisions that aren’t revenue-driven.
Salesforce CEO Marc Benioff posted on X that “the good guys” won. But that might be premature to say.
Sure, Altman “won,” besting a board that accused him of “not [being] consistently candid” with board members and, according to some reporting, putting growth over mission. In one example of this alleged rogueness, Altman was said to have been critical of Toner over a paper she co-authored that cast OpenAI’s approach to safety in a critical light — to the point where he attempted to push her off the board. In another, Altman “infuriated” Sutskever by rushing the launch of AI-powered features at OpenAI’s first developer conference.
The board didn’t explain themselves even after repeated chances, citing possible legal challenges. And it’s safe to say that they dismissed Altman in an unnecessarily histrionic way. But it can’t be denied that the directors might have had valid reasons for letting Altman go, at least depending on how they interpreted their humanistic directive.
The new board seems likely to interpret that directive differently.
Currently, OpenAI’s board consists of former Salesforce co-CEO Bret Taylor, D’Angelo (the only holdover from the original board) and Larry Summers, the economist and former Harvard president. Taylor is an entrepreneur’s entrepreneur, having co-founded numerous companies, including FriendFeed (acquired by Facebook) and Quip (through whose acquisition he came to Salesforce). Meanwhile, Summers has deep business and government connections — an asset to OpenAI, the thinking around his selection probably went, at a time when regulatory scrutiny of AI is intensifying.
The directors don’t seem like an outright “win” to this reporter, though — not if diverse viewpoints were the intention. While six seats have yet to be filled, the initial three set a rather homogeneous tone; a board like this would run afoul of European rules that require listed companies to fill at least 40% of their non-executive board seats with women.
I’m not the only one who’s disappointed. A number of AI academics turned to X to air their frustrations earlier today.
Noah Giansiracusa, a math professor at Bentley University and the author of a book on social media recommendation algorithms, takes issue both with the board’s all-male makeup and the nomination of Summers, who he notes has a history of making unflattering remarks about women.
“Whatever one makes of these incidents, the optics are not good, to say the least — particularly for a company that has been leading the way on AI development and reshaping the world we live in,” Giansiracusa said via text. “What I find particularly troubling is that OpenAI’s main aim is developing artificial general intelligence that ‘benefits all of humanity.’ Since half of humanity are women, the recent events don’t give me a ton of confidence about this. Toner most directly represents the safety side of AI, and this has so often been the position women have been placed in, throughout history but especially in tech: protecting society from great harms while the men get the credit for innovating and ruling the world.”
Christopher Manning, the director of Stanford’s AI Lab, is slightly more charitable than — but in agreement with — Giansiracusa in his assessment:
“The newly formed OpenAI board is presumably still incomplete,” he told TechCrunch. “Nevertheless, the current board membership, lacking anyone with deep knowledge about responsible use of AI in human society and comprising only white males, is not a promising start for such an important and influential AI company.”
Inequity plagues the AI industry, from the annotators who label the data used to train generative AI models to the harmful biases that often emerge in those trained models, including OpenAI’s models. Summers, to be fair, has expressed concern over AI’s possibly harmful ramifications — at least as they relate to livelihoods. But the critics I spoke with find it difficult to believe that a board like OpenAI’s present one will consistently prioritize these challenges, at least not in the way that a more diverse board would.
It raises the question: Why didn’t OpenAI attempt to recruit a well-known AI ethicist like Timnit Gebru or Margaret Mitchell for the initial board? Were they “not available”? Did they decline? Or did OpenAI not make an effort in the first place? Perhaps we’ll never know.
Reportedly, OpenAI considered Laurene Powell Jobs and Marissa Mayer for board roles, but they were deemed too close to Altman. Condoleezza Rice’s name was also floated, but ultimately passed over.
OpenAI has a chance to prove itself wiser and worldlier in selecting the five remaining board seats — or three, should Altman and a Microsoft executive take one each (as has been rumored). If they don’t go a more diverse way, what Daniel Colson, the director of the think tank the AI Policy Institute, said on X may well be true: a few people or a single lab can’t be trusted with ensuring AI is developed responsibly.
Updated 11/23 at 11:26 a.m. Eastern: Embedded a post from Timnit Gebru and information from a report about passed-over potential OpenAI women board members.
Kyle Wiggers

Institutional real estate investors have historically struggled to buy up tonnes of family homes (the so-called ‘single-family rental’ sector) so they can turn us all into rental slaves and lock millions into a rentier economy. A few startups are trying to ease the ‘pain’ of these rapacious harbingers of hyper-capitalism.
Immo Capital, a platform for managing residential real estate portfolios, has raised $90.7 million. Bricklane, another platform for rental housing, has raised £6 million out of London. And Casafari in Spain/Portugal has raised $20.5 million.
Into this market has launched DoorFeed, founded by James Kirimy, an early Uber UK employee. It has secured a €7 million seed extension round led by Motive Ventures (backed by private equity firm Apollo, owners of Yahoo! and thus TechCrunch), with participation from Stride VC and Seedcamp. The firm previously raised a €3.5 million seed round led by Stride and Seedcamp in 2021, and €1.5 million in debt financing from BPI France in 2022.
In simple terms, DoorFeed provides the data platform and operations for investment funds to assemble and manage large scale portfolios of apartments and houses. It also allows them to figure out which houses have a bad energy performance, and then renovate them, possibly unlocking ESG credits from governments, it claims.
It makes money via a sourcing fee and renovation management fee, as well as an annual property and asset management fee.
Looking at the market independently, these companies are clearly onto something that would make a hedge fund manager blush.
Investment in European living assets exceeded all other real estate assets classes in the second quarter at €10.6 billion, according to JLL, and 20% of the market is buy-to-let investors.
Mike Butcher

[ad_1]
The OpenAI power struggle that captivated the tech world after co-founder Sam Altman was fired has finally reached its end — at least for the time being. But what to make of it?
It feels almost as though some eulogizing is called for — like OpenAI died and a new, but not necessarily improved, startup stands in its midst. Ex-Y Combinator president Altman is back at the helm, but is his return justified? OpenAI’s new board of directors is getting off to a less diverse start (i.e. it’s entirely white and male), and the company’s founding philanthropic aims are in jeopardy of being co-opted by more capitalist interests.
That’s not to suggest that the old OpenAI was perfect by any stretch.
As of Friday morning, OpenAI had a six-person board — Altman, OpenAI chief scientist Ilya Sutskever, OpenAI president Greg Brockman, tech entrepreneur Tasha McCauley, Quora CEO Adam D’Angelo and Helen Toner, director at Georgetown’s Center for Security and Emerging Technologies. The board was technically tied to a nonprofit that had a majority stake in OpenAI’s for-profit side, with absolute decision-making power over the for-profit OpenAI’s activities, investments and overall direction.
OpenAI’s unusual structure was established by the company’s co-founders, including Altman, with the best of intentions. The nonprofit’s exceptionally brief (500-word) charter outlines that the board make decisions ensuring “that artificial general intelligence benefits all humanity,” leaving it to the board’s members to decide how best to interpret that. Neither “profit” nor “revenue” get a mention in this North Star document; Toner reportedly once told Altman’s executive team that triggering OpenAI’s collapse “would actually be consistent with the [nonprofit’s] mission.”
Maybe the arrangement would have worked in some parallel universe; for years, it appeared to work well enough at OpenAI. But once investors and powerful partners got involved, things became… trickier.
After the board abruptly canned Altman on Friday without notifying just about anyone, including the bulk of OpenAI’s 770-person workforce, the startup’s backers began voicing their discontent in both private and public.
Satya Nadella, the CEO of Microsoft, a major OpenAI collaborator, was allegedly “furious” to learn of Altman’s departure. Vinod Khosla, the founder of Khosla Ventures, another OpenAI backer, said on X (formerly Twitter) that the fund wanted Altman back. Meanwhile, Thrive Capital, the aforementioned Khosla Ventures, Tiger Global Management and Sequoia Capital were said to be contemplating legal action against the board if negotiations over the weekend to reinstate Altman didn’t go their way.
Now, OpenAI employees weren’t unaligned with these investors from outside appearances. On the contrary, close to all of them — including Sutskever, in an apparent change of heart — signed a letter threatening the board with mass resignation if they opted not to reverse course. But one must consider that these OpenAI employees had a lot to lose should OpenAI crumble — job offers from Microsoft and Salesforce aside.
OpenAI had been in discussions, led by Thrive, to possibly sell employee shares in a move that would have boosted the company’s valuation from $29 billion to somewhere between $80 billion and $90 billion. Altman’s sudden exit — and OpenAI’s rotating cast of questionable interim CEOs — gave Thrive cold feet, putting the sale in jeopardy.
But now after several breathless, hair-pulling days, some form of resolution’s been reached. Altman — along with Brockman, who resigned on Friday in protest over the board’s decision — is back, albeit subject to a background investigation into the concerns that precipitated his removal. OpenAI has a new transitionary board, satisfying one of Altman’s demands. And OpenAI will reportedly retain its structure, with investors’ profits capped and the board free to make decisions that aren’t revenue-driven.
Salesforce CEO Marc Benioff posted on X that “the good guys” won. But that might be premature to say.
Sure, Altman “won,” besting a board that accused him of “not [being] consistently candid” with board members and, according to some reporting, putting growth over mission. In one example of this alleged rogueness, Altman was said to have been critical of Toner over a paper she co-authored that cast OpenAI’s approach to safety in a critical light — to the point where he attempted to push her off the board. In another, Altman “infuriated” Sutskever by rushing the launch of AI-powered features at OpenAI’s first developer conference.
The board didn’t explain themselves even after repeated chances, citing possible legal challenges. And it’s safe to say that they dismissed Altman in an unnecessarily histrionic way. But it can’t be denied that the directors might have had valid reasons for letting Altman go, at least depending on how they interpreted their humanistic directive.
The new board seems likely to interpret that directive differently.
Currently, OpenAI’s board consists of former Salesforce co-CEO Bret Taylor, D’Angelo (the only holdover from the original board) and Larry Summers, the economist and former Harvard president. Taylor is an entrepreneur’s entrepreneur, having co-founded numerous companies, including FriendFeed (acquired by Facebook) and Quip (through whose acquisition he came to Salesforce). Meanwhile, Summers has deep business and government connections — an asset to OpenAI, the thinking around his selection probably went, at a time when regulatory scrutiny of AI is intensifying.
The directors don’t seem like an outright “win” to this reporter, though — not if diverse viewpoints were the intention. While six seats have yet to be filled, the initial four set a rather homogenous tone; such a board would in fact be illegal in Europe, which mandates companies reserve at least 40% of their board seats for women candidates.
I’m not the only one who’s disappointed. A number of AI academics turned to X to air their frustrations earlier today.
Noah Giansiracusa, a math professor at Bentley University and the author of a book on social media recommendation algorithms, takes issue both with the board’s all-male makeup and the nomination of Summers, who he notes has a history of making unflattering remarks about women.
“Whatever one makes of these incidents, the optics are not good, to say the least — particularly for a company that has been leading the way on AI development and reshaping the world we live in,” Giansiracusa said via text. “What I find particularly troubling is that OpenAI’s main aim is developing artificial general intelligence that ‘benefits all of humanity.’ Since half of humanity are women, the recent events don’t give me a ton of confidence about this. Toner most directly representatives the safety side of AI, and this has so often been the position women have been placed in, throughout history but especially in tech: protecting society from great harms while the men get the credit for innovating and ruling the world.”
Christopher Manning, the director of Sanford’s AI Lab, is slightly more charitable than — but in agreement with — Giansiracusa in his assessment:
“The newly formed OpenAI board is presumably still incomplete,” he told TechCrunch. “Nevertheless, the current board membership, lacking anyone with deep knowledge about responsible use of AI in human society and comprising only white males, is not a promising start for such an important and influential AI company.”
Inequity plagues the AI industry, from the annotators who label the data used to train generative AI models to the harmful biases that often emerge in those trained models, including OpenAI’s models. Summers, to be fair, has expressed concern over AI’s possibly harmful ramifications — at least as they relate to livelihoods. But the critics I spoke with find it difficult to believe that a board like OpenAI’s present one will consistently prioritize these challenges, at least not in the way that a more diverse board would.
It raises the question: Why didn’t OpenAI attempt to recruit a well-known AI ethicist like Timnit Gebru or Margaret Mitchell for the initial board? Were they “not available”? Did they decline? Or did OpenAI not make an effort in the first place? Perhaps we’ll never know.
OpenAI has a chance to prove itself wiser and worldlier in selecting the five remaining board seats — or three, should Altman and a Microsoft executive take one each (as has been rumored). If they don’t go a more diverse way, what Daniel Colson, the director of the think tank the AI Policy Institute, said on X may well be true: a few people or a single lab can’t be trusted with ensuring AI is developed responsibly.
Kyle Wiggers

San Francisco — OpenAI said Tuesday that its co-founder Sam Altman would return to the tech company as CEO, just days after he was fired by its board of directors and then quickly announced that he was joining Microsoft.
“We have reached an agreement in principle for Sam to return to OpenAI as CEO with a new initial board of Bret Taylor (Chair), Larry Summers, and Adam D’Angelo,” OpenAI said in a post on the social media platform X.
In his own statement on the platform, Altman said he loved OpenAI and that everything he’d done “over the past few days has been in service of keeping this team and its mission together.”
OpenAI also said it is forming a new board that will include former Salesforce co-CEO Bret Taylor, who will serve as chair; economist and former U.S. Treasury Secretary Larry Summers; and existing director Adam D’Angelo, CEO of Quora. Three directors involved in the decision to oust Altman — Ilya Sutskever, Helen Toner and Tasha McCauley — are leaving the board.
Dan Ives, an analyst with Wedbush Securities, said he expects the overhaul of OpenAI’s board to strengthen its governance. “The former, now infamous, board members are finally gone after the failed coup, and now in essence OpenAI will be virtually the same as before this soap opera began,” he said in a report.
Altman said that when he decided on Sunday evening to join Microsoft, “it was clear that was the best path for me and the team,” but that with the new board announced by OpenAI — and the support of Microsoft’s chairman and CEO, Satya Nadella — he was looking forward to returning to the company he helped establish and “to building on our strong partnership with” the software giant.
Nadella said Microsoft’s leadership had been “encouraged by the changes to the OpenAI board” and that the company believed they were “a first essential step on a path to more stable, well-informed, and effective governance.”
OpenAI, which makes the popular artificial intelligence-powered chatbot ChatGPT, said Friday that Altman was pushed out after a review found he was “not consistently candid in his communications” with the board of directors, which had lost confidence in his ability to lead OpenAI.
One Wall Street research firm said, however, that it believed tensions had arisen over Altman’s push to develop more advanced products.
“These tensions likely resulted in frustrating communications and Sam making some operational decisions without keeping the board fully aware,” said New Street Research in a Monday research note. “The coup, and the sibylline associated blog post, about Sam not being ‘consistently candid in his communications with the board, hindering its ability to exercise its responsibilities’ resulted from this situation.”
One board member tweeted that they regretted their participation in the decision to oust Altman.
“I deeply regret my participation in the board’s actions. I never intended to harm OpenAI. I love everything we’ve built together and I will do everything I can to reunite the company,” wrote board member Ilya Sutskever in a Monday morning social media post.
Altman catapulted ChatGPT to global fame while serving as company CEO and in the past year has become Silicon Valley’s sought-after voice on the promise and potential dangers of artificial intelligence.

OpenAI’s AI chatbot ChatGPT has forecast a tremendous price movement for the Solana (SOL) cryptocurrency, indicating an 8x surge for the token from its current price level.
The bot suggested Solana is well positioned for a bullish run, predicting that SOL could reach the $1,000 mark by the end of 2024 on the back of positive market developments and broad adoption of the cryptocurrency’s blockchain. In the bot’s words:
“Solana, propelled by a cosmic surge of innovation and widespread adoption, could potentially reach a stellar price of $1,000 by the end of December 2024.”
Solana’s prospects seem promising thanks to several recent factors fueling growth for the cryptocurrency. Its blockchain has recently garnered strong interest from the cryptocurrency community: the network’s performance, minimal transaction costs, and scalability have drawn an increasing number of users and developers, spurring adoption of Solana-based applications and positioning the digital asset for a potential price increase.
In addition, rising demand for leveraged long positions could buttress this prediction: SOL’s futures open interest recently reached its highest level since the token’s all-time high price of $260 in November 2021.
The demand for the cryptocurrency is anticipated to increase as the Solana ecosystem grows, pushing up the asset’s price.
Solana’s Total Value Locked (TVL) has also surged. Its TVL was recently valued at approximately $409.68 million but now stands at $584.59 million, an increase of more than 42%, according to DefiLlama.
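For readers who want to check the math, the percentage increase can be recomputed from the two TVL figures (a minimal sketch; the dollar amounts are the approximate values quoted from DefiLlama):

```python
# Recompute the TVL percentage change from the figures cited in the article.
old_tvl = 409.68e6  # prior TVL in USD (approximate)
new_tvl = 584.59e6  # current TVL in USD (approximate)

pct_increase = (new_tvl - old_tvl) / old_tvl * 100
print(f"TVL change: {pct_increase:.1f}%")  # ~42.7%, consistent with "over 42%"
```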
Another factor that could propel the asset’s price is the current bullish sentiment of the cryptocurrency market. Without a doubt, Solana has been the market’s most optimistic large-cap cryptocurrency this year.
SOL has increased by approximately 550% since the beginning of 2023, making it the fifth-best performer among the top 100 cryptocurrencies.
Last week, SOL experienced a significant price surge, reaching its yearly high of $68. Due to the general attitude of the market, the price of Solana could thrive in this conducive atmosphere.
As of this writing, the crypto asset is trading at approximately $60, up 0.21% in the past 24 hours. Its market capitalization stands at $25,435,629,906, reflecting the same 24-hour increase, according to CoinMarketCap.
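Since market capitalization is simply price multiplied by circulating supply, the two figures above imply a supply estimate (a rough cross-check; both inputs are the approximate values quoted in the article):

```python
# Derive the circulating supply implied by the quoted price and market cap.
price_usd = 60.0                 # approximate SOL price from the article
market_cap_usd = 25_435_629_906  # market cap quoted from CoinMarketCap

implied_supply = market_cap_usd / price_usd
print(f"Implied circulating supply: ~{implied_supply / 1e6:.0f} million SOL")
```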
Godspower Owie

Microsoft announced Monday that it has hired Sam Altman and another co-founder of ChatGPT maker OpenAI after they unexpectedly departed the company days earlier in a corporate shakeup that shocked the artificial intelligence world.
Microsoft Chairman and CEO Satya Nadella said Microsoft, the major investor in OpenAI — which was behind the chatbot that kicked off the generative AI craze — looked “forward to getting to know” OpenAI’s new chief executive, former Twitch leader Emmett Shear, and the rest of OpenAI’s management team.
Nadella wrote on X, formerly known as Twitter, that Microsoft was “extremely excited to share the news that Sam Altman and Greg Brockman, together with colleagues, will be joining Microsoft to lead a new advanced AI research team.”
Altman said “the mission continues,” in reply to Nadella on X.
Shear also took to X to confirm his appointment as OpenAI’s new CEO, writing Monday that, “Today I got a call inviting me to consider a once-in-a-lifetime opportunity: to become the interim CEO of @OpenAI. After consulting with my family and reflecting on it for just a few hours, I accepted.”
At the end of a long post, he added a denial that OpenAI fired Altman due to concerns about AI’s dangers.
“Before I took the job,” Shear wrote, “I checked on the reasoning behind the change. The board did *not* remove Sam over any specific disagreement on safety, their reasoning was completely different from that. I’m not crazy enough to take this job without board support for commercializing our awesome models.”
OpenAI said Friday that Altman was pushed out after a review found he was “not consistently candid in his communications” with the board of directors, which had lost confidence in his ability to lead OpenAI.
Altman catapulted ChatGPT to global fame while serving as company CEO and in the past year has become Silicon Valley’s sought-after voice on the promise and potential dangers of artificial intelligence.
OpenAI had announced last week that co-founder Brockman would step down as board chairman but remain on as president. Brockman followed with a post on X reprinting a message he sent to OpenAI employees in which he wrote, “based on today’s news, I quit.”
In another X post Friday night, Brockman said Altman was asked to join a video meeting with the company’s board members in which co-founder and chief scientist Ilya Sutskever informed Altman he was being fired.
Brockman added that he was informed of his removal from the board in a separate call with Sutskever a short time later.
“Sam and I are shocked and saddened by what the board did today,” Brockman wrote on X.
Based in San Francisco, the technology startup founded in 2015 released online chatbot ChatGPT about a year ago, setting off an AI boom that made Altman famous and landed him meetings with President Joe Biden and other leaders.
OpenAI’s ChatGPT and image generator Dall-E brought generative AI into the mainstream, while also stirring debate about the technology’s potential for replacing workers in a range of jobs and further blurring the line between content produced by machines and by people. OpenAI has also found itself the target of multiple lawsuits by authors, artists and other creators alleging that the company’s technology copied their work.
As CEO, Altman led a transformation of a little-known startup into a company engaged in talks to sell employee shares to investors at a valuation in excess of $80 billion, according to media outlets including Bloomberg News and The New York Times.
Outspoken and known for stirring controversy, Altman has also attracted attention for sounding the alarm about AI’s potential threat to humanity.

Worldcoin (WLD) experienced a sharp decline in its market price on Saturday after its founder, Sam Altman, was removed as the CEO of popular artificial intelligence company OpenAI. This development follows the heavy regulatory scrutiny on the crypto project due to privacy concerns.
In a shocking development on Friday, OpenAI announced a leadership change, stating that Sam Altman will immediately exit the company as its CEO.
In the statement, the company expressed gratitude to Altman for his immense contributions during his four-year tenure as its leader.
However, after an intensive review process, the board of directors concluded that the Worldcoin founder had not been fully honest in his exchanges with it, leading to a loss of confidence in his ability to continue as CEO.
Following Altman’s departure, Mira Murati, OpenAI’s chief technology officer, will now act as interim CEO pending the appointment of a permanent successor.
However, as one could expect, Altman’s removal as OpenAI CEO, combined with the scathing statements in the company’s announcement, has created a negative sentiment around the Worldcoin project.
According to data from CoinMarketCap, WLD is currently down by 12.75% over the last 24 hours. Meanwhile, the token’s daily trading volume has managed to remain afloat with a 15.13% gain.
Worldcoin was officially launched in July with the goal of creating the largest digital identity and financial network. The project relies on the use of iris-scanning orbs to physically admit new members, prompting concerns about privacy, anonymity, and user data protection.
After Altman’s departure from OpenAI, there is now speculation about Worldcoin’s future trajectory. Worldcoin clearly benefited from Altman’s visibility as OpenAI’s CEO, as reflected in the token’s current downtrend.
Notably, WLD gained by over 25% in October, with many analysts citing anticipation of the OpenAI developer conference in November as the driving force.
Therefore, Altman’s removal from OpenAI may not bode well for Worldcoin’s credibility and public investor sentiment. On the other hand, Worldcoin could yet recover from its market slump despite this new challenge.
Following its official public launch in July, Worldcoin drew considerable criticism from global regulators, who expressed concerns over its collection of data using iris-scanning orbs and the potential applications of that user data.
Notably, Worldcoin’s operations have been suspended in Kenya, while the governments of the United Kingdom and Germany have opened investigations into the project’s operations.
However, the crypto project has remained resilient despite these regulatory hurdles, marking a milestone of 4 million app downloads and 1 million monthly active users, as reported earlier in November.
At the time of writing, WLD trades at $1.85, with a 0.86% decline in the last hour. Meanwhile, the token’s market cap stands valued at $211.37 million.
WLD trading at $1.874 on the daily chart | Source: WLDUSDT chart on Tradingview.com
Semilore Faleti

Artificial intelligence will bring changes to many professions, including law. But it’s also claiming victims who trust too much in its capabilities.
Among them is Zachariah Crabill, who was an overwhelmed rookie lawyer at a law firm in Colorado Springs when he gave in to the temptation of using ChatGPT in May.
The AI chatbot helped him write a motion in seconds, saving him hours of work, as local radio station KRDO reported in June. But after he filed the document with a Colorado court, he realized that something was amiss: several of the case citations generated by ChatGPT were made up.
OpenAI’s ChatGPT is known to be confidently wrong, and in this case it simply created cases out of thin air that sounded convincing. Crabill did not check to make sure the cases were real before submitting his work.
Crabill admitted his mistake to the judge, who reported him to a statewide office, and in July the young attorney was fired from his job at Baker Law Group.
In his statement to the court admitting his mistake, Crabill wrote, “I felt my lack of experience in legal research and writing, and consequently, my efficiency in this regard could be exponentially augmented to the benefit of my clients by expediting the time-intensive research portion of drafting.”
Crabill isn’t the only lawyer to trust ChatGPT too much. In June, two lawyers were scolded and fined $5,000 by a federal judge in New York for submitting a legal brief that also cited nonexistent cases.
In sanctions against Steven A. Schwartz and Peter LoDuca of Levidow, Levidow & Oberman, the judge wrote: “Technological advances are commonplace, and there is nothing inherently improper about using a reliable artificial intelligence tool for assistance. But existing rules impose a gatekeeping role on attorneys to ensure the accuracy of their filings.”
“I did not comprehend that ChatGPT could fabricate cases,” Schwartz had earlier told the judge.
But Crabill, for his part, isn’t giving up on AI tools, despite the traumatic experience.
“I still use ChatGPT in my day-to-day, much like most people use Google on the job,” he told Business Insider. Indeed, he has since started a company that provides legal services via AI.
In a Washington Post piece published on Thursday, Crabill said he would likely use AI tools designed specifically for lawyers to aid in his writing and research.
He added, “There’s no point in being a naysayer or being against something that is invariably going to become the way of the future.”
Steve Mollman

Sam Altman, the now former CEO of OpenAI, has departed his role and is leaving its board, according to a company post on Friday. But questions about his role at other entities like Worldcoin, the crypto project he co-founded, remain up in the air as its token falls on the news.
Worldcoin’s token, WLD, fell more than 13% on the day, to $1.91, CoinMarketCap data showed. When asked about Altman’s future at Worldcoin or its plans going forward, Worldcoin did not respond to TechCrunch’s request for comment.
Altman’s crypto project raised $115 million in May in a Series C round led by Blockchain Capital. In March, TechCrunch reported that Altman was on the board of Worldcoin but was not involved in its “day-to-day” operations.
Worldcoin signs up users by scanning their irises with its Orb, which then assigns each user an “iris code,” or “World ID,” that grants access to the project’s application and provides them with “a digital passport,” Tiago Sada, head of product for Tools for Humanity and a core contributor to Worldcoin, said on TechCrunch’s Chain Reaction podcast in September. The verification process purportedly allows people to prove their identity, and the iris code is used to make sure they don’t sign up a second time.
In August, Worldcoin faced pushback from countries, including Kenya, which halted the project from scanning any more of its citizens’ eyeballs (and the project ignored initial orders). Worldcoin has faced backlash from critics, who allege the company targets developing economies. The project gives most participants (outside the U.S. and some other countries) 25 WLD tokens, worth roughly $48, in exchange for signing up, which could be seen as exploitative.
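The roughly $48 valuation of the sign-up grant can be sanity-checked against the token price reported earlier in the piece (a minimal sketch using only numbers from the article):

```python
# Cross-check the dollar value of the 25-WLD sign-up grant against
# the WLD price quoted elsewhere in the article (~$1.91).
grant_tokens = 25        # WLD given to most new sign-ups
grant_value_usd = 48.0   # approximate dollar value cited

implied_price = grant_value_usd / grant_tokens
print(f"Implied WLD price: ${implied_price:.2f}")  # $1.92, close to the quoted $1.91
```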
Sada said that giving out the free tokens and going to developing countries was fair because most projects, especially in crypto and tech, focus on emerging markets, as “those are the easier ones to operate in.”
While OpenAI stated Friday that the board “no longer has confidence in [Altman’s] ability to continue leading” the company, its statement didn’t fully explain why Altman was fired or where he stands with other related organizations, like Worldcoin.
Worldcoin’s application has over four million downloads, and its active user base has “more than double[d]” globally, according to a blog post from the beginning of November. There are more than 2.4 million “unique humans” on Worldcoin; in the seven days preceding this writing, about 53,800 new accounts were created and there were more than 59,000 daily wallet transactions, according to the company’s website.
Jacquelyn Melinek

TeddyGPT Party: Redefining Eco-Friendly and Interactive Children’s Party Planning
VANCOUVER, British Columbia, November 15, 2023 (Newswire.com) –
Toymint is thrilled to announce the launch of “TeddyGPT Party,” a fusion of its beloved AI companion TeddyGPT and its modern Teddy Party platform. Together, they bring a unique chat-with-teddy interface that plans a child’s birthday party, combining convenience with a touch of magic.
Available now for free at teddygpt.com/party, TeddyGPT Party is set to revolutionize children’s party planning, making it more accessible, environmentally friendly, and enjoyable.
TeddyGPT Party: A New Era in Party Planning
Interactive Chat Interface: Plan events effortlessly by chatting with TeddyGPT. This friendly AI teddy bear guides users through organizing a memorable party.
Sustainable Invitations: Quickly send stylish, electronic invitations via email or text, significantly reducing paper waste.
Customizable Party Themes: The platform suggests personalized, eco-friendly party themes and activities, aligning with the company’s commitment to sustainability.
Experience the Charm of AI-Powered Party Planning
Join us in experiencing the future of party planning. Access the TeddyGPT Party’s unique chat interface for free at teddygpt.com/party and witness how planning a child’s birthday party can be both fun and environmentally conscious.
About Toymint
Toymint, a leader in sustainable and educational toys, is redefining how families celebrate. Our mission is to blend cutting-edge technology with traditional values, ensuring that every celebration is meaningful and eco-friendly.
About TeddyGPT
TeddyGPT combines the warmth of a teddy bear with advanced AI technology, providing an interactive and educational experience. It adapts to each child’s unique preferences, making it a beloved companion for all ages. With the introduction of TeddyGPT Party, TeddyGPT takes on a new role as a party planner, adding a delightful twist to birthday celebrations.
About Teddy Party
Teddy Party transforms birthday planning into an effortless and impactful experience. “We envision birthday celebrations as not just events but lifelong memories,” says Krystal Commons, co-founder and Chief Experience Officer of Toymint. Teddy Party enables eco-friendly and memorable party planning, steering away from outdated practices towards a more sustainable future.
Source: Toymint

Welcome to Startups Weekly. Sign up here to get it in your inbox every Friday.
If you’ve been following along with this newsletter, you’ll have noticed that I’ve been a little bit curious about AI — especially generative AI. I’m likely not the first person to make this observation, but AIs are extremely, painfully average. I guess that’s kind of the point of them — train them on all knowledge, and mediocrity will surface.
The trick is to only use AI tools for stuff that you, yourself, aren’t very good at. If you’re an expert artist or writer, it’ll let you down. The truth, though, is that most people aren’t great writers, and so ChatGPT and its brethren are going to be a massive benefit to white-collar workers everywhere. Well, until we collectively discover that a house cleaner has greater job security than an office manager or a secretary, at least.
On that cheerful note, let’s sniff about in the startup bushes and see what tasty morsels we can scare up from the depths of the TechCrunch archive from the past week. . . .
I know, this happens every damn week: I start with the intention of writing this newsletter without going up to my eyelashes into the AI morass, and every week, y’all keep reading our AI news as if your livelihood depends on it. Because, well, it’s entirely possible it does, I suppose.
The GPT Store, introduced by OpenAI, enables developers to create custom GPT-based conversational AI models and sell them in a new marketplace. This initiative is designed to expand the accessibility and commercial use of AI, similar to how app stores revolutionized software distribution. Developers can not only build but also monetize their AI creations, opening up a new avenue for innovation and entrepreneurship in the field of artificial intelligence. Of course, that little update — and the platform now natively being able to read PDFs and websites — is a substantial threat to startups that had previously filled this gap in ChatGPT’s offerings, especially those whose business models are based on such features. It’s a reminder that building a business around another company’s API without a sustainable, stand-alone product is, perhaps, not the shrewdest business move.
AI is, of course, not just for startups. During Apple’s Q4 earnings call, the company’s CEO, Tim Cook, emphasized AI as a fundamental technology and highlighted recent AI-driven features like Personal Voice and Live Voicemail in iOS 17. He also confirmed that Apple is continuing to develop generative AI technologies — tellingly, without revealing specifics.
Heinlein would be horrified: Elon Musk announced that Twitter’s Premium Plus subscribers will soon have early access to xAI’s new AI system, Grok, once it exits early beta, positioning the chatbot as a perk for the platform’s $16/month ad-free service tier.
Brother, can you spare a GPU?: AWS introduced Amazon Elastic Compute Cloud (EC2) Capacity Blocks for ML, a new service that enables customers to rent Nvidia GPUs for a set period, primarily for AI tasks like training or experimenting with machine learning models.
From zero to AI founder in one easy bootstrap: In “How to bootstrap an AI startup” on TC+, Michael Koch advises founders on maintaining control over their startup’s strategy and product by bootstrapping — yes, even in the oft-capital-intensive world of AI startups.
WeWork, once a high-flying startup valued at $47 billion, has filed for Chapter 11 bankruptcy protection, highlighting a staggering collapse. The company, which has over $18.6 billion of debt, received agreement from about 90% of its lenders to convert $3 billion of debt into equity in an attempt to improve its balance sheet and address its costly leases. On TC+, Alex notes what we kinda knew all along: that the core business just didn’t make sense.
In other venture news . . .
Ex-Twitter CEO raises third venture fund: 01 Advisors, the venture firm founded by former Twitter executives Dick Costolo and Adam Bain, has secured $395 million in capital commitments for its third fund, aimed at investing in Series B–stage startups focused on business software and fintech services.
Happy 10th unicornaversary: Alex reflects on the tenth anniversary of the term “unicorn,” which was initially coined right here on TechCrunch, to describe startups valued at over $1 billion.
You get a chip! You get a chip!: In response to a shortage of AI chips, Microsoft is updating its startup support program to offer selected startups free access to advanced Azure AI supercomputing resources to develop AI models.
Look, I’m not going to lie, I think most crypto is dumb, and I’ve seen only a handful of startups that use blockchains in a way that makes any sense whatsoever — most of them would have done just fine with a simple database — so I’ve been following Jacquelyn’s coverage of Bankman-Fried’s trial with a not insignificant amount of schadenfreude. It’s human to make mistakes, and startup founders are human, but if you’re defrauding the fuck out of people, you deserve all the comeuppance you can get.
Sam Bankman-Fried was the co-founder and CEO of the cryptocurrency exchange FTX and the trading firm Alameda Research (named specifically to not sound like a crypto company). He has been found guilty on all seven counts of fraud and money laundering.
The charges were related to a scheme involving misappropriating billions of dollars of customer funds deposited with FTX and misleading investors and lenders of both FTX and Alameda Research. After the five-week trial, the jury took just four hours to reach its verdict.
The collapse of FTX and Alameda Research, which led to the indictment of Bankman-Fried about 11 months ago by the U.S. Department of Justice, was significant, with the executives allegedly stealing over $8 billion in customer funds.
Sentencing will happen next March, but if he gets smacked with the full weight of his actions, he will face a total possible sentence of 115 years in prison.
Jacquelyn did a heroic job covering the trial for TechCrunch, and it’s worth taking an afternoon to read through it all — the details are mind-boggling.
The house sometimes wins: Mr. Cooper, a mortgage and loan company, experienced a “cybersecurity incident” that led to an ongoing system outage. The company says it has taken steps to secure data and address the issue.
Can’t think of any downsides of the Hindenburg: The world’s largest aircraft, Pathfinder 1, is an electric airship prototype developed by LTA Research and funded by Sergey Brin. It was unveiled this week, promising a new era in sustainable air travel.
Arrival’s departure: The EV startup Arrival, which aimed to revolutionize electric vehicle production with its micro-factory model, is now facing severe operational challenges, including multiple layoffs, missed production targets, and noncompliance with SEC filing requirements, resulting in a plummet from a $13 billion valuation.
Haje Jan Kamps