ReportWire

Tag: generative ai

  • NYC’s AI chatbot was caught telling businesses to break the law. The city isn’t taking it down


    NEW YORK — An artificial intelligence-powered chatbot created by New York City to help small business owners is under criticism for dispensing bizarre advice that misstates local policies and advises companies to violate the law.

    But days after the issues were first reported last week by tech news outlet The Markup, the city has opted to leave the tool on its official government website. Mayor Eric Adams defended the decision this week even as he acknowledged the chatbot’s answers were “wrong in some areas.”

    Launched in October as a “one-stop shop” for business owners, the chatbot offers users algorithmically generated text responses to questions about navigating the city’s bureaucratic maze.

    It includes a disclaimer that it may “occasionally produce incorrect, harmful or biased” information and the caveat, since-strengthened, that its answers are not legal advice.

    It continues to dole out false guidance, troubling experts who say the buggy system highlights the dangers of governments embracing AI-powered systems without sufficient guardrails.

    “They’re rolling out software that is unproven without oversight,” said Julia Stoyanovich, a computer science professor and director of the Center for Responsible AI at New York University. “It’s clear they have no intention of doing what’s responsible.”

    In responses to questions posed Wednesday, the chatbot falsely suggested it is legal for an employer to fire a worker who complains about sexual harassment, doesn’t disclose a pregnancy or refuses to cut their dreadlocks. Contradicting two of the city’s signature waste initiatives, it claimed that businesses can put their trash in black garbage bags and are not required to compost.

At times, the bot’s answers veered into the absurd. Asked if a restaurant could serve cheese nibbled on by a rodent, it responded: “Yes, you can still serve the cheese to customers if it has rat bites,” before adding that it was important to assess “the extent of the damage caused by the rat” and to “inform customers about the situation.”

    A spokesperson for Microsoft, which powers the bot through its Azure AI services, said the company was working with city employees “to improve the service and ensure the outputs are accurate and grounded on the city’s official documentation.”

    At a press conference Tuesday, Adams, a Democrat, suggested that allowing users to find issues is just part of ironing out kinks in new technology.

    “Anyone that knows technology knows this is how it’s done,” he said. “Only those who are fearful sit down and say, ‘Oh, it is not working the way we want, now we have to run away from it all together.’ I don’t live that way.”

    Stoyanovich called that approach “reckless and irresponsible.”

    Scientists have long voiced concerns about the drawbacks of these kinds of large language models, which are trained on troves of text pulled from the internet and prone to spitting out answers that are inaccurate and illogical.

But as the success of ChatGPT and other chatbots has captured the public’s attention, private companies have rolled out their own products, with mixed results. Earlier this month, a court ordered Air Canada to refund a customer after a company chatbot misstated the airline’s refund policy. Both TurboTax and H&R Block have faced recent criticism for deploying chatbots that give out bad tax-prep advice.

    Jevin West, a professor at the University of Washington and co-founder of the Center for an Informed Public, said the stakes are especially high when the models are promoted by the public sector.

    “There’s a different level of trust that’s given to government,” West said. “Public officials need to consider what kind of damage they can do if someone was to follow this advice and get themselves in trouble.”

    Experts say other cities that use chatbots have typically confined them to a more limited set of inputs, cutting down on misinformation.

    Ted Ross, the chief information officer in Los Angeles, said the city closely curated the content used by its chatbots, which do not rely on large language models.

    The pitfalls of New York’s chatbot should serve as a cautionary tale for other cities, said Suresh Venkatasubramanian, the director of the Center for Technological Responsibility, Reimagination, and Redesign at Brown University.

    “It should make cities think about why they want to use chatbots, and what problem they are trying to solve,” he wrote in an email. “If the chatbots are used to replace a person, then you lose accountability while not getting anything in return.”


  • What’s Stopping You From Using Generative AI in Your Business? | Entrepreneur


    Opinions expressed by Entrepreneur contributors are their own.

    Generative AI is catalyzing a significant paradigm shift in various business sectors, including coding, data science, content creation, virtual assistance, medical support, artistic innovation, media, marketing, game development, financial analysis and digital education, among others. This technology is enhancing business efficiencies broadly and transforming key industries such as healthcare, education and technology, as well as organizational workflows in numerous areas, thereby promising high returns. Consequently, generative AI is reshaping the future of work across a wide array of industries.

    MarketsandMarkets predicts substantial growth in the global AI industry, estimating it will reach $1,345.20 billion by 2030, with a compound annual growth rate (CAGR) of 36.8% from 2023 to 2030. In alignment with these financial expectations, the Infosys Knowledge Institute has shown that “firms that use AI well can increase enterprise profit by 38% and will help deliver $14 trillion of gross added value to corporations by 2035.”
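Figures like these can be sanity-checked with the standard compound-growth formula. The short Python sketch below takes the quoted 2030 value and CAGR and derives the implied 2023 base value; that base figure is computed from the quoted numbers here, not taken from the report.

```python
# Compound growth: end = base * (1 + cagr) ** years,
# so the implied base is end / (1 + cagr) ** years.
end_value_bn = 1345.20  # projected 2030 global AI market size, USD billions (quoted)
cagr = 0.368            # 36.8% compound annual growth rate (quoted)
years = 2030 - 2023     # seven-year projection window

implied_base_bn = end_value_bn / (1 + cagr) ** years
print(f"Implied 2023 base: ${implied_base_bn:.1f}B")  # roughly $150B
```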

    Yet, despite these promising financial forecasts, the actual deployment of AI technologies, particularly generative AI, falls short of expectations. Gartner survey results published in Harvard Business Review indicate a potential gap in AI adoption: While 70% of organizations were exploring generative AI as of March and April 2023, only 4% had fully implemented these technologies.

    So why do business leaders hesitate to integrate generative AI solutions into their operations?

    The reasons experts have identified for the slow adoption of generative AI primarily center on high costs, data complexities and a shortage of skilled professionals, among other factors. Additionally, there is prevalent mistrust among recent startups and technology industries concerning the effectiveness of available solutions and worries about potential pitfalls.

Of these many factors, here are three key reasons that stand out for business leaders’ hesitancy to embrace AI, along with suggested strategies for addressing these challenges.

    Related: You Can Fear It and Still Use It — Why Are So Many American Workers Shy About AI?

    3 reasons business leaders hesitate to adopt generative AI

    1. Resistance to change and ego

Leadership often hesitates to adopt new technologies, preferring traditional business methods. For example, surgeons at MGH only adopted the da Vinci surgical robot after losing patients to hospitals using the technology, as they found it hard to leave established practices for new training methods. Similarly, many business leaders stick to conventional workflows, perceiving changes like AI adoption as disruptive, resource-draining and costly. Their personal discomfort with AI may lead them to reject such innovations, claiming they are “too busy” to explore new technological advancements.

    2. Lack of expertise in the field of AI applications

    The problem of leadership resistance is further exacerbated by the shortage of specialized training or expertise, coupled with the challenge of retaining skilled professionals in the realm of artificial intelligence (AI) applications. Despite the burgeoning enthusiasm for AI, there remains a critical need for specialized AI experts within business leadership teams. Although sourcing highly skilled individuals in the market is challenging, retaining them poses an additional hurdle, potentially leading businesses to shy away from considering AI solutions.

    3. The majority of generative AI solutions are still in the beta phase

While generative AI is transforming industries and businesses globally, most of its applications, including ChatGPT, are still in the beta phase and continuously improving. The more these systems are used, the more data they gather to refine their responses.

    However, they are not infallible; they can still experience “hallucinations” or produce errors, which might be detrimental depending on the context of use. Such inaccuracies or data security and privacy issues with generative AI applications can have serious implications, particularly in business settings, where they can affect clients or customers and potentially harm the business’s reputation. These incidents can lead to distrust among business leaders, resulting in hesitation or outright refusal to adopt AI solutions.

    Related: This Is the New ChatGPT Trend That Will Enhance Your Business

    How to thoughtfully adopt AI into your business

    1. Weigh the benefits and risks for business impact

    Businesses that adapt swiftly to change often outperform their competitors, underscoring the importance for leadership to stay abreast of new trends using AI solutions and be ready to embrace them, even when the changes are challenging. A primary consideration is determining whether the AI trend is beneficial for the company’s operations, specifically if it can enhance current processes or workflows efficiently and cost-effectively.

It’s advisable to engage external expert advisors to assess the advantages and risks of new technological trends, as well as their potential impact on the business. This evaluation process might entail multiple discussions with external AI specialists, financial advisors and internal teams to reach an informed decision about whether to adopt an AI solution.

    2. Conduct in-depth technical evaluations of AI solutions before you choose and implement one

    Once the decision to adopt an AI solution has been made, the subsequent crucial step is to conduct an exhaustive market search for AI vendor offerings. It is vital to undertake comprehensive evaluations of AI technology solutions before adoption to minimize the risk of potential failures that could negatively impact a business.

    Investing in a meticulous and thorough due diligence process, which includes multiple rounds of assessment by different AI technology experts, is imperative. This method is more than a precautionary measure — it’s a strategic investment that could save a business not just thousands, but potentially millions of dollars in the long run by ensuring that the selected AI solutions are well-aligned with the business’s needs and are set up for successful integration.

    3. Make time for ongoing specialized AI workshops and updates for leadership teams

    It is crucial for business leaders to allocate time to understand the basics of available AI business solutions in the market. A recommended approach is to organize regular lunch-hour workshops tailored toward specific AI solutions for leadership, led by AI industry experts. These workshops are an effective strategy to keep leaders engaged and informed about the latest developments in AI technology. By integrating customized learning/training into their schedules in an efficient and focused manner, leaders can stay abreast of advancements and better position their businesses to leverage AI innovations.

    Related: How Cutting Edge AI Tech Could Be Your Answer to Tackling Stress

As we navigate the burgeoning era of generative AI solutions, which are finding application across various business sectors, it’s crucial for leadership to remain engaged and informed to avoid falling behind competitors. The new generative AI era is upon us and advancing rapidly. To leverage its advantages, business leaders need to be well-prepared and proactive, ensuring a deep understanding and strategic integration of these advanced technologies into their planning, efficiently and effectively, to steer their businesses toward success.

    Sahar Hashmi, MD-PhD

  • Change Your Attitude Towards AI — And Harness Its Power For Success | Entrepreneur


    Opinions expressed by Entrepreneur contributors are their own.

2023 was a year of major AI disruption. Particularly in prompt-based content creation, we saw a rapid boom of new tools. The public-facing version of ChatGPT reached millions of users within months of its launch.

    However, even with leaping numbers in favor of Gen AI, entrepreneurs are constantly wondering if it’s all hype or if Gen AI truly has the potential to bring long-term business benefits.

The booming use of prompt-based AI has also drawn entrepreneurs into the debate about the ethical use of AI. We must not forget that foundational AI models are built on large amounts of unverified data.

While this means anyone without technical knowledge can leverage such foundational Gen AI models, it also means the process can yield generic or inaccurate information, and can even create data hazards.

    Related: 6 Positive Impacts of Artificial Intelligence on Digital Marketing

A recent Accenture report says that 76% of C-suite leaders see generative AI as an opportunity for streamlining operations, reducing costs and driving business growth. However, nearly 72% of respondents are investing in AI with caution due to concerns about its responsible use.

    Let’s first discuss the primary areas that hinder growth-minded businesses from implementing AI systems:

    1. Strategy: There’s palpable confusion about how AI can transform competitive dynamics and add value to business models. Most C-suite leaders are unsure how to map the financial and non-financial value generated by AI models so that they can generate the best value for their businesses. Also, in most cases, there exist huge complexities regarding the contractual and logistical viability of AI partnerships.
    2. Technology: Most leaders are still unsure which parts of their proprietary data and tech stacks should be made redundant or can be capitalized on more in the future. Leaders also witness massive capabilities and skill gaps with regard to AI system operations.
    3. Compliance: AI governance is rapidly evolving with increasing data threats. This puts leaders in uncertainty about how AI regulations will pan out across jurisdictions in the future.
    4. People: There is an increasing concern within human resources about the future of work as most perceive AI as their replacement. Next-gen leaders are still unsure how to rationalize this change management in their business.
    5. Stakeholders: Business leaders face resistance not just from human resources but also from partner networks. Most C-suite leaders struggle with AI adaptability in their partner networks, which lack tech sophistication in streamlining, securing, and reprocessing data fabrics for AI integration.

    In this article, we will discuss a few ways for business leaders to develop an actionable AI strategy. Let’s get started.

    1. Make amplified human capabilities the key focus

AI models are certainly designed to evolve, but they lack emotional intelligence and moral reasoning. When integrating AI into business, as a C-suite leader, you must remember that AI is not a means to replace your human resources but to complement and further augment their operational capabilities.

There’s also a need to build confidence in your AI systems with some fundamental models. Your goal should be to create robust and actionable yet adaptive AI strategies that align with global compliance requirements and constraints.

    2. Have a designated AI control center

At the moment, even as businesses chase the benefits of Gen AI, more and more business leaders are concerned about AI hazards. Built with human-like tech intelligence, Gen AI can spiral out of hand without definite controls.

    Also, you must align your AI strategies with a long-term business vision to reap maximum benefits. When integrating AI, you cannot centralize your business’s technical capabilities. Instead, you must have a leader with strong digital transformation capabilities and adept knowledge of AI risk and governance to design ROI metrics, establish business-wide best practices, align your strategies with financial goals, reduce risks, and, most importantly, capitalize on value from AI investments.

    3. Consider AI as a model to transform from ground zero

We are at a crucial point in tech evolution, where tech investments can no longer be about transforming only specific business functions. With AI, the need is to reimagine entire business processes.

    Until now, you must have wondered, “How can AI make my business process efficient?” Now it’s time to consider, “How can AI help me innovate my business process further?”

    You must aim to drive the maximum impact of AI systems right from ground zero but with strict governance.

    Related: I Tested AI Tools So You Don’t Have To. Here’s What Worked — and What Didn’t.

    4. Take a look at gaps – both talent and technology

Redundant tech architecture and workforce skill gaps are the biggest constraints on an AI growth strategy. To leverage the optimum value of AI, you need to take a closer look at restrictive data structures and outdated tech systems first.

    As a leader, you must restructure data fabrics, computational powers, and architectural capabilities to integrate AI into your enterprise systems. You also need to cleanse, secure, and process your proprietary data for seamless AI adoption.

Also, you need to recognize one critical aspect of AI expectations: AI can’t improve work if your workforce is left behind or reluctant to accept and adopt it. Upskilling your employees to elevate them into AI- and data-enabled roles is a crucial need of the era.

    Takeaway

    The uncertainties around AI integration are real. However, that shouldn’t stop you from reaping its proven potential. AI disruption is the order of the era, and as a business leader, you can evade its drawbacks with a monitored and practical AI strategy.

Think about the post-Covid era, when once high-growth businesses faded away because they refused to move with much-needed tech disruptions. The tech world has surely moved towards more sustainable practices now, but as a fast-moving business, you cannot let go of a disruptive model that’s bound to be the hero of tomorrow.

    So, proceed with caution, take stock of your AI investment, but don’t hesitate to innovate with Gen AI.

    Divyesh Patel

  • Apple and Google Collaboration – Gemini AI to Boost iPhone’s Smart Functions


    March 18 (Reuters) – In a significant development, Apple (AAPL.O) is currently in negotiations to integrate Google’s advanced Gemini artificial intelligence platform into its iPhone offerings, according to sources reported by Bloomberg News on Monday. The discussions revolve around the licensing of Gemini to enhance certain upcoming features of the iPhone’s software later this year, though specifics on the agreement’s terms, branding, or the exact implementation have yet to be solidified.

    Market Reaction and Strategic Timing

    Following the news, Alphabet’s shares saw a substantial increase of over 6% in early trading in the United States, with Apple’s stock also rising by 2.5%. Any formal announcement of a deal is anticipated to be postponed until June, coinciding with Apple’s yearly developer conference.

    Apple has been in conversations with OpenAI, the creators of ChatGPT, about incorporating its model, highlighting Apple’s keen interest in bolstering its AI capabilities.

    Potential Impact of the Deal

    Immediate comments from Apple, Google (owned by Alphabet, GOOGL.O), and OpenAI were not available in response to Reuters’ inquiries. A collaboration between these tech giants could significantly extend Google’s AI services across Apple’s vast ecosystem, which boasts over 2 billion active devices.

    This move is seen as a strategic effort by Google to strengthen its position against Microsoft-backed OpenAI, while simultaneously addressing Apple’s challenges in rapidly deploying AI applications—a factor contributing to Apple’s recent 10% share price decline and its loss of the title as the world’s most valuable company.

    Regulatory Considerations and Future Plans

    However, this deal might attract increased attention from U.S. regulators, given Google’s previous legal challenges regarding its search engine dominance and the financial arrangements with Apple to maintain its position.

    Daniel Ives, an analyst at Wedbush, highlighted the significance of this partnership, stating, “This strategic partnership is a critical element in Apple’s AI strategy, uniting with Google to leverage Gemini for powering AI features Apple plans to introduce.” He further emphasized the advantage for Google, noting the access to Apple’s substantial user base and the considerable licensing fees involved.

    Google’s January collaboration with Samsung, Apple’s competitor, to implement its Gemini AI in the Galaxy S24 smartphone series was part of its broader strategy to enhance Gemini’s adoption following initial setbacks. Apple CEO Tim Cook recently indicated the company’s substantial investment in generative AI, with plans to unveil its applications later in the year.

    According to Bloomberg, while Apple aims to deploy its in-house AI models for certain new functionalities in the forthcoming iOS 18, it is also exploring partnerships to drive generative AI features, including image creation and essay writing based on simple inputs.

    Srdjan Ilic

  • Europe’s world-first AI rules get final approval from lawmakers. Here’s what happens next


    LONDON — European Union lawmakers gave final approval to the 27-nation bloc’s artificial intelligence law Wednesday, putting the world-leading rules on track to take effect later this year.

    Lawmakers in the European Parliament voted overwhelmingly in favor of the Artificial Intelligence Act, five years after regulations were first proposed. The AI Act is expected to act as a global signpost for other governments grappling with how to regulate the fast-developing technology.

    “The AI Act has nudged the future of AI in a human-centric direction, in a direction where humans are in control of the technology and where it — the technology — helps us leverage new discoveries, economic growth, societal progress and unlock human potential,” Dragos Tudorache, a Romanian lawmaker who was a co-leader of the Parliament negotiations on the draft law, said before the vote.

    Big tech companies generally have supported the need to regulate AI while lobbying to ensure any rules work in their favor. OpenAI CEO Sam Altman caused a minor stir last year when he suggested the ChatGPT maker could pull out of Europe if it can’t comply with the AI Act — before backtracking to say there were no plans to leave.

    Here’s a look at the world’s first comprehensive set of AI rules:

    Like many EU regulations, the AI Act was initially intended to act as consumer safety legislation, taking a “risk-based approach” to products or services that use artificial intelligence.

    The riskier an AI application, the more scrutiny it faces. The vast majority of AI systems are expected to be low risk, such as content recommendation systems or spam filters. Companies can choose to follow voluntary requirements and codes of conduct.

    High-risk uses of AI, such as in medical devices or critical infrastructure like water or electrical networks, face tougher requirements like using high-quality data and providing clear information to users.

Some AI uses are banned because they’re deemed to pose an unacceptable risk, like social scoring systems that govern how people behave, some types of predictive policing and emotion recognition systems in schools and workplaces.

    Other banned uses include police scanning faces in public using AI-powered remote “biometric identification” systems, except for serious crimes like kidnapping or terrorism.
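The tiered structure described above can be pictured as a simple lookup from use case to scrutiny level. The Python sketch below is purely illustrative: the tier assignments are simplified examples drawn from this article, not legal classifications under the Act.

```python
# Illustrative mapping of example AI use cases to the AI Act's risk tiers.
# These assignments paraphrase the article and are not legal guidance.
RISK_TIERS = {
    "unacceptable": {"social scoring", "emotion recognition in schools"},
    "high": {"medical device AI", "critical infrastructure control"},
    "minimal": {"spam filter", "content recommendation"},
}

def risk_tier(use_case: str) -> str:
    """Return the risk tier for a known example use case, else 'unclassified'."""
    for tier, cases in RISK_TIERS.items():
        if use_case in cases:
            return tier
    return "unclassified"

print(risk_tier("spam filter"))     # -> minimal
print(risk_tier("social scoring"))  # -> unacceptable
```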

    The law’s early drafts focused on AI systems carrying out narrowly limited tasks, like scanning resumes and job applications. The astonishing rise of general purpose AI models, exemplified by OpenAI’s ChatGPT, sent EU policymakers scrambling to keep up.

    They added provisions for so-called generative AI models, the technology underpinning AI chatbot systems that can produce unique and seemingly lifelike responses, images and more.

    Developers of general purpose AI models — from European startups to OpenAI and Google — will have to provide a detailed summary of the text, pictures, video and other data on the internet that is used to train the systems as well as follow EU copyright law.

    AI-generated deepfake pictures, video or audio of existing people, places or events must be labeled as artificially manipulated.

There’s extra scrutiny for the biggest and most powerful AI models that pose “systemic risks,” which include OpenAI’s GPT-4 — its most advanced system — and Google’s Gemini.

    The EU says it’s worried that these powerful AI systems could “cause serious accidents or be misused for far-reaching cyberattacks.” They also fear generative AI could spread “harmful biases” across many applications, affecting many people.

    Companies that provide these systems will have to assess and mitigate the risks; report any serious incidents, such as malfunctions that cause someone’s death or serious harm to health or property; put cybersecurity measures in place; and disclose how much energy their models use.

    Brussels first suggested AI regulations in 2019, taking a familiar global role in ratcheting up scrutiny of emerging industries, while other governments scramble to keep up.

    In the U.S., President Joe Biden signed a sweeping executive order on AI in October that’s expected to be backed up by legislation and global agreements. In the meantime, lawmakers in at least seven U.S. states are working on their own AI legislation.

Chinese President Xi Jinping has proposed his Global AI Governance Initiative for fair and safe use of AI, and authorities have issued “interim measures” for managing generative AI, which apply to text, pictures, audio, video and other content generated for people inside China.

    Other countries, from Brazil to Japan, as well as global groupings like the United Nations and Group of Seven industrialized nations, are moving to draw up AI guardrails.

    The AI Act is expected to officially become law by May or June, after a few final formalities, including a blessing from EU member countries. Provisions will start taking effect in stages, with countries required to ban prohibited AI systems six months after the rules enter the lawbooks.

    Rules for general purpose AI systems like chatbots will start applying a year after the law takes effect. By mid-2026, the complete set of regulations, including requirements for high-risk systems, will be in force.

When it comes to enforcement, each EU country will set up its own AI watchdog, where citizens can file a complaint if they think they’ve been the victim of a violation of the rules. Meanwhile, Brussels will create an AI Office tasked with enforcing and supervising the law for general purpose AI systems.

    Violations of the AI Act could draw fines of up to 35 million euros ($38 million), or 7% of a company’s global revenue.
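That penalty cap is commonly structured as whichever amount is higher, as with comparable EU regulations such as the GDPR; a minimal sketch under that assumption:

```python
# Maximum AI Act fine: EUR 35 million or 7% of global annual revenue,
# assuming the higher of the two amounts applies (an assumption here,
# based on how comparable EU regulations structure their caps).
def max_fine_eur(global_annual_revenue_eur: float) -> float:
    return max(35_000_000.0, 0.07 * global_annual_revenue_eur)

print(max_fine_eur(100_000_000))     # smaller firm: the flat 35M cap dominates
print(max_fine_eur(10_000_000_000))  # large firm: 7% of revenue dominates
```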

    This isn’t Brussels’ last word on AI rules, said Italian lawmaker Brando Benifei, co-leader of Parliament’s work on the law. More AI-related legislation could be ahead after summer elections, including in areas like AI in the workplace that the new law partly covers, he said.


  • Using AI to keep YouTube safe


    Generative AI has the potential to unlock creativity on YouTube and transform the experience for viewers and creators. But just as important, these opportunities must be balanced with the responsibility to protect the YouTube community. All content uploaded to YouTube is subject to Community Guidelines—regardless of how it’s generated—but AI will introduce new risks and will require new approaches.

    YouTube is in the early stages of this work and will continue to evolve the approach as the platform grows further. Here’s a look at what YouTube will roll out:

    1. Disclosure policy

    YouTube has implemented a policy requiring creators to clearly disclose when a video has been altered or synthetically created using AI tools. This helps viewers distinguish between genuine content and potentially misleading videos. For example, this could be an AI-generated video that realistically depicts an event that never happened, or content showing someone saying or doing something they didn’t actually do. This is especially important in cases where the content discusses sensitive topics, such as elections, ongoing conflicts and public health crises, or public officials.

    2. Removal request via privacy policy

YouTube will make it possible to request the removal of AI-generated or other synthetic or altered content that simulates an identifiable individual, including their face or voice, through its privacy request process. Not all content will be removed from YouTube, and the platform will consider various factors when evaluating these requests. These could include whether the content is parody or satire, whether the person making the request can be uniquely identified, or whether it features a public official or well-known individual, in which case there may be a higher bar.

    3. Deploying AI to power content moderation

YouTube has always used a combination of people and machine learning technologies to enforce its Community Guidelines, with more than 20,000 reviewers across Google operating around the world. In these systems, AI classifiers help detect potentially violative content at scale, and reviewers work to confirm whether content has actually crossed policy lines. AI is continuously increasing both the speed and accuracy of YouTube’s content moderation systems.

One clear area of impact has been in identifying novel forms of abuse. When new threats emerge, the systems have relatively little context to understand and identify them at scale. But generative AI helps Google rapidly expand the set of information AI classifiers are trained on, meaning they can identify and catch this content much more quickly. The improved speed and accuracy also reduce the amount of harmful content human reviewers are exposed to.
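The division of labor described above, classifiers flagging at scale and humans confirming, can be sketched as a simple score-based routing rule. The thresholds and names below are hypothetical illustrations, not YouTube’s actual values.

```python
# Hypothetical classifier-plus-human-review routing. A model assigns each
# piece of content a violation score in [0, 1]; near-certain violations are
# actioned automatically, uncertain cases go to a human reviewer, and
# low-scoring content is published. Thresholds are illustrative only.
AUTO_ACTION_THRESHOLD = 0.98   # near-certain policy violations
HUMAN_REVIEW_THRESHOLD = 0.60  # uncertain cases need a human decision

def route(violation_score: float) -> str:
    """Decide what happens to content given its classifier score."""
    if violation_score >= AUTO_ACTION_THRESHOLD:
        return "auto_action"
    if violation_score >= HUMAN_REVIEW_THRESHOLD:
        return "human_review"
    return "publish"

print(route(0.99))  # -> auto_action
print(route(0.75))  # -> human_review
print(route(0.10))  # -> publish
```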

    Aside from innovative product features and policies, Google and YouTube will continue to invest deeply in awareness and education initiatives that promote critical thinking skills and help people spot misinformation. These include fact-checking workshops, collaborations with content creators (the “Mag-ingat” song with Ben&Ben) and partnerships with media literacy organizations (the #YOUThink publication with CANVAS), among many others.

    Gadgets Magazine

  • OpenAI has ‘full confidence’ in CEO Sam Altman after investigation, reinstates him to board

    OpenAI is reinstating CEO Sam Altman to its board of directors and said it has “full confidence” in his leadership after the conclusion of an outside investigation into the company’s turmoil.

    The ChatGPT maker tapped the law firm WilmerHale to look into what led the company to abruptly fire Altman in November, only to rehire him days later. After months of investigation, it found that Altman’s ouster was a “consequence of a breakdown in the relationship and loss of trust” between him and the prior board, OpenAI said in a summary of the findings Friday. It did not release the full report.

    OpenAI also announced it has added three women to its board of directors: Dr. Sue Desmond-Hellmann, a former CEO of the Bill & Melinda Gates Foundation; Nicole Seligman, a former Sony general counsel; and Instacart CEO Fidji Simo.

    The actions are a way for the San Francisco-based artificial intelligence company to show investors and customers that it is trying to move past the internal conflicts that nearly destroyed it last year and made global headlines.

    “I’m pleased this whole thing is over,” Altman told reporters Friday, adding that he’s been disheartened to see “people with an agenda” leaking information to try to harm the company or its mission and “pit us against each other.” At the same time, he said he’s learned from the experience and apologized for a dispute with a former board member he could have handled “with more grace and care.”

    In a parting shot, two board members who voted to fire Altman before getting pushed out themselves wished the new board well but said accountability is paramount when building technology “as potentially world-changing” as what OpenAI is pursuing.

    “We hope the new board does its job in governing OpenAI and holding it accountable to the mission,” said a joint statement from ex-board members Helen Toner and Tasha McCauley. “As we told the investigators, deception, manipulation, and resistance to thorough oversight should be unacceptable.”

    For more than three months, OpenAI said little about what led its then-board of directors to fire Altman on Nov. 17. An announcement that day said Altman was “not consistently candid in his communications” in a way that hindered the board’s ability to exercise its responsibilities. He also was kicked off the board, along with its chairman, Greg Brockman, who responded by quitting his job as the company’s president.

    Many of OpenAI’s conflicts have been rooted in its unusual governance structure. Founded as a nonprofit with a mission to safely build futuristic AI that helps humanity, it is now a fast-growing big business still controlled by a nonprofit board bound to its original mission.

    The investigation found the prior board acted within its discretion. But it also determined that Altman’s “conduct did not mandate removal,” OpenAI said. It said both Altman and Brockman remained the right leaders for the company.

    “The review concluded there was a significant breakdown in trust between the prior board, and Sam and Greg,” Bret Taylor, the board’s chair, told reporters Friday. “And similarly concluded that the board acted in good faith, that the board believed at the time that its actions would mitigate some of the challenges that it perceived and didn’t anticipate some of the instability.”

    The dangers posed by increasingly powerful AI systems have long been a subject of debate among OpenAI’s founders and leaders. But citing the law firm’s findings, Taylor said Altman’s firing “did not arise out of concerns regarding product safety or security.”

    Nor was it about OpenAI’s finances or any statements made to investors, customers or business partners, Taylor said.

    Days after his surprise ouster, Altman and his supporters — with backing from most of OpenAI’s workforce and close business partner Microsoft — helped orchestrate a comeback that brought Altman and Brockman back to their executive roles and forced out board members Toner, a Georgetown University researcher; McCauley, a scientist at the RAND Corporation; and another co-founder, Ilya Sutskever. Sutskever kept his job as chief scientist and publicly expressed regret for his role in ousting Altman.

    “I think Ilya loves OpenAI,” Altman said Friday, saying he hopes they will keep working together but declining to answer a question about Sutskever’s current position at the company.

    Altman and Brockman did not regain their board seats when they rejoined the company in November. But an “initial” new board of three men was formed, led by Taylor, a former Salesforce and Facebook executive who also chaired Twitter’s board before Elon Musk took over the platform. The others are former U.S. Treasury Secretary Larry Summers and Quora CEO Adam D’Angelo, the only member of the previous board to stay on.

    (Both Quora and Taylor’s new startup, Sierra, operate their own AI chatbots that rely in part on OpenAI technology.)

    After it retained the law firm in December, OpenAI said WilmerHale conducted dozens of interviews with the company’s prior board, current executives, advisers and other witnesses. The company also said the law firm reviewed thousands of documents and other corporate actions. WilmerHale didn’t immediately respond to a request for comment Friday.

    The board said it will also be making “improvements” to the company’s governance structure. It said it will adopt new corporate governance guidelines, strengthen the company’s policies around conflicts of interest, create a whistleblower hotline that will allow employees and contractors to submit anonymous reports and establish additional board committees.

    The company still has other troubles to contend with, including a lawsuit filed by Musk, who helped bankroll the early years of OpenAI and was a co-chair of its board after its 2015 founding. Musk alleges that the company is betraying its founding mission in pursuit of profits.

    Legal experts have expressed doubt about whether Musk’s arguments, centered around an alleged breach of contract, will hold up in court.

    But it has already forced open the company’s internal conflicts about its unusual governance structure, how “open” it should be about its research and how to pursue what’s known as artificial general intelligence, or AI systems that can perform just as well as — or even better than — humans in a wide variety of tasks.

    Taylor said Friday that OpenAI’s “mission-driven nonprofit” structure won’t be changing as it continues to pursue its vision for artificial general intelligence that benefits “all of humanity.”

    “Our duties are to the mission, first and foremost, but the company — this amazing company that we’re in right now — was created to serve that mission,” Taylor said.

    ___

    The Associated Press and OpenAI have a licensing and technology agreement that allows OpenAI access to part of AP’s text archives.

  • Sam Altman rejoins OpenAI board of directors as investigation into his ouster comes to a close

    OpenAI on Friday announced its new board and the wrap-up of an internal investigation by U.S. law firm WilmerHale into the events leading up to OpenAI CEO Sam Altman’s ouster.

    Sam Altman will also rejoin OpenAI’s board.

    The new board members are:

    • Dr. Sue Desmond-Hellmann, former CEO of the Bill & Melinda Gates Foundation, who is also on the Board of Directors at Pfizer and on the President’s Council of Advisors on Science and Technology.
    • Nicole Seligman, former EVP and Global General Counsel of Sony and President of Sony Entertainment, who is also on the Board of Directors at Paramount Global, MeiraGTx and Intuitive Machines, Inc.
    • Fidji Simo, CEO and Chair of Instacart, who is also on the Board of Directors at Shopify.

    The three new members will “work closely with current board members Adam D’Angelo, Larry Summers and Bret Taylor as well as Greg, Sam, and OpenAI’s senior management,” according to a release.

    OpenAI will continue to expand the board moving forward, executives said on a Zoom call with reporters.

    OpenAI did not publish the investigation report but provided a summary of the findings.

    “The review concluded there was a significant breakdown of trust between the prior board and Sam and Greg,” Taylor said, adding that the review also “concluded the board acted in good faith… [and] did not anticipate some of the instability that led afterwards.”

    Taylor also said the board’s decision did not arise from concerns over product safety and security, OpenAI’s finances, or statements to customers or business partners; it was “simply a breakdown in trust between the board and Mr. Altman.”

    WilmerHale’s investigation began in December, and the lawyers submitted their report Friday. The investigation included dozens of interviews with OpenAI’s prior board members and advisors, current executives and other witnesses, and involved reviewing more than 30,000 documents, according to a release.

    “We have unanimously concluded that Sam and Greg are the right leaders for OpenAI,” Bret Taylor, chair of OpenAI’s board, said in a release.

    “I am very grateful to Bret and Larry and WilmerHale,” Altman said on the Zoom call with reporters. He added, speaking of CTO Mira Murati, “Mira in particular is instrumental to OpenAI all the time … but through that period in November, she has done an amazing job helping to lead the company.”

    He added that he is “excited to be moving forward here” and for the situation to be “over.” He also said he wished he had acted differently in handling his disagreements with the board.

    In November, OpenAI’s board ousted Altman, prompting resignations – or threats of resignations – including an open letter signed by virtually all of OpenAI’s employees, and uproar from investors, including Microsoft. Within a week, Altman was back at the company, and board members Helen Toner, Tasha McCauley and Ilya Sutskever, who had voted to oust Altman, were out. Adam D’Angelo, who had also voted to oust Altman, stayed on the board.

    When Altman was asked about Sutskever’s status on the Zoom call with reporters, he said there were no updates to share.

    “I love Ilya… I hope we work together for the rest of our careers, my career, whatever,” Altman said. “Nothing to announce today.”

    Since then, OpenAI has announced new board members, including former Salesforce co-CEO Bret Taylor and former Treasury Secretary Larry Summers. Microsoft obtained a nonvoting board observer position.

    After its launch in November 2022, ChatGPT broke records as the fastest-growing consumer app in history. It now has about 100 million weekly active users, and more than 92% of Fortune 500 companies use the platform, according to OpenAI. Last year, Microsoft invested an additional $10 billion in the company, making it the biggest AI investment of the year, according to PitchBook. OpenAI has also reportedly closed a deal allowing employees to sell shares at an $86 billion valuation, though that deal reportedly took longer than expected to close because of the events surrounding Altman’s ouster.

    Those rollercoaster weeks are still affecting the company months later.

    This month, billionaire tech magnate Elon Musk sued OpenAI co-founders Sam Altman and Greg Brockman for breach of contract and breach of fiduciary duty, court filings revealed on Thursday.

    In his complaint, Musk and his attorneys allege that the ChatGPT maker “has been transformed into a closed-source de facto subsidiary of the largest technology company in the world: Microsoft.” They also argue that this arrangement goes against a founding agreement and 2015 certificate of incorporation that OpenAI established with Musk, who was a pivotal donor to, and a cofounder of, OpenAI in its early years.

    As part of Microsoft’s contract with OpenAI, the tech giant only has rights to OpenAI’s “pre-AGI” technology, and it is up to OpenAI’s board to determine whether the company has reached that milestone. Musk argued in his filing that since the OpenAI board shuffle in November – when Toner, McCauley and Sutskever were removed – the new board is “ill-equipped” to independently determine whether OpenAI has reached AGI and therefore whether its technology is outside the scope of the exclusivity deal with Microsoft.

    Lawyers told CNBC that they had doubts about the legal viability of Musk’s case, and OpenAI has said it plans to file a motion to dismiss all of Musk’s claims.

    In response to the high-profile lawsuit, OpenAI reproduced old emails from Musk in which the Tesla and SpaceX CEO encouraged the rising startup to raise at least $1 billion in funding, and agreed that it should “start being less open” over time and “not share” the company’s science with the public.

    Musk’s lawsuit also follows some controversy over Altman’s previous chip endeavors and investments.

    Just before Altman’s brief ouster, he was reportedly seeking billions for a new and not-yet-formed chip venture code-named “Tigris” to eventually compete with Nvidia, traveling to the Middle East to raise money from investors.

    In 2018, Altman personally invested in an AI chip startup called Rain Neuromorphics, based near OpenAI’s San Francisco headquarters, and in 2019, OpenAI signed a letter of intent to spend $51 million on Rain’s chips. In December, the U.S. compelled a Saudi Aramco-backed venture capital firm to sell its shares in Rain.

  • ITV Bosses Predict Post-Strike Commissioning “Blip” & Describe Generative AI As A “Co-Pilot” For Creatives

    ITV boss Carolyn McCall has detailed the potential for a post-strike “blip” as commissioning conversations are restarted with U.S. buyers, while the head of ITV Studios described generative AI as a “co-pilot for creatives.”

    Speaking after the publication of ITV’s full-year results, which saw profits tumble by 32% amidst a tricky ad market, McCall said a factor “never really talked about” in relation to last year’s months-long U.S. labor strikes is that “commissioning conversations could not happen and we were not allowed to pitch ideas or discuss anything that would build a pipeline.”

    “This is never really talked about but is also quite an important factor in the market,” she said, adding that the more obvious impact of the strike physically delaying productions had been prominent in discussions.

    Pressed on whether this could lead to a surge in commissioning from U.S. buyers, McCall said “we are saying this is a blip and permanent conversations will start filtering through. I don’t know about a bump, but this is a blip.”

    The writers and actors strikes debilitated commissioning around the world and U.S. buyers are only just returning to a semblance of normality, while many major media congloms are posing existential questions over their place in the market.

    ITV has already said the strikes will delay around £80M ($101M) of turnover from 2024 to 2025, but 2023 was a good year for its streamer shows. The proportion of ITV Studios’ revenue from SVoDs rose by 10 percentage points last year to 32%, already ahead of the five-year target the production arm had aimed to hit by 2026. Successes included Netflix drama Fool Me Once and Squid Game: The Challenge, although ITV is not raising the target in response.

    Studios boss Julian Bellamy acknowledged a “tougher market” with “some free-to-air clients slowing commissioning,” but he said the production arm is “optimistic” in the medium term.

    He said ITV Studios is “outgrowing” the market on certain metrics such as content licensing and unscripted shows for streamers.

    AI “co-pilots”

    Bellamy spoke to the debate around generative AI and was bullish about how the growing tech can help rather than hinder ITV and its production arm.

    He described generative AI as a “co-pilot for creatives” but stressed it won’t be “substitutional.” “It won’t write the next Succession. We see it as something that over time will be helpful but is much more embryonic.” McCall, meanwhile, talked up how AI is helping streamer ITVX with targeted ads.

    The pair were speaking a few weeks after Slow Horses director James Hawes unveiled research showing that AI could write the entirety of a TV soap within three to five years, which comes after the BBC axed Doctors and Channel 4 pared back Hollyoaks.

    Too early to talk layoffs

    ITV’s profits tumbled by 32% last year due in the main to the tricky ad market. The broadcaster this morning revealed it is implementing a “strategic restructuring and efficiency programme across the group to reshape the cost base and enhance profitability.”

    Responding to a question from Deadline, McCall said it is “too early to say specifically” whether the program will lead to layoffs but “we will keep the market updated as we go through.”

    The program is designed to “de-risk” any “cyclical or structural” shocks that may occur in future, McCall said.

    “You can’t do that completely but what we are doing is building ourselves headroom and future-proofing,” she added. “That is why we have accelerated what we are doing and it is across everything.”

    By the end of 2024, ITV expects the program to have made savings of at least £50M per year. It said today it has more broadly delivered £130M of its £150M cost savings target by 2026 and will hit this figure a year early.

    Max Goldbart

  • Do AI video-generators dream of San Pedro? Madonna among early adopters of AI’s next wave

    Whenever Madonna sings the 1980s hit “La Isla Bonita” on her concert tour, moving images of swirling, sunset-tinted clouds play on the giant arena screens behind her.

    To get that ethereal look, the pop legend embraced a still-uncharted branch of generative artificial intelligence – the text-to-video tool. Type some words — say, “surreal cloud sunset” or “waterfall in the jungle at dawn” — and an instant video is made.

    Following in the footsteps of AI chatbots and still image-generators, some AI video enthusiasts say the emerging technology could one day upend entertainment, enabling you to choose your own movie with customizable story lines and endings. But there’s a long way to go before they can do that, and plenty of ethical pitfalls on the way.

    For early adopters like Madonna, who’s long pushed art’s boundaries, it was more of an experiment. She nixed an earlier version of “La Isla Bonita” concert visuals that used more conventional computer graphics to evoke a tropical mood.

    “We tried CGI. It looked pretty bland and cheesy and she didn’t like it,” said Sasha Kasiuha, content director for Madonna’s Celebration Tour that continues through late April. “And then we decided to try AI.”

    ChatGPT-maker OpenAI gave a glimpse of what sophisticated text-to-video technology might look like when the company recently showed off Sora, a new tool that’s not yet publicly available. Madonna’s team tried a different product from New York-based startup Runway, which helped pioneer the technology by releasing its first public text-to-video model last March. The company released a more advanced “Gen-2” version in June.

    Runway CEO Cristóbal Valenzuela said that while some see these tools as a “magical device that you type a word and somehow it conjures exactly what you had in your head,” the most effective uses come from creative professionals looking for an upgrade to the decades-old digital editing software they’re already using.

    He said Runway can’t yet make a full-length documentary. But it could help fill in some background video, or b-roll — the supporting shots and scenes that help tell the story.

    “That saves you perhaps like a week of work,” Valenzuela said. “The common thread of a lot of use cases is people use it as a way of augmenting or speeding up something they could have done before.”

    Runway’s target customers are “large streaming companies, production companies, post-production companies, visual effects companies, marketing teams, advertising companies. A lot of folks that make content for a living,” Valenzuela said.

    Dangers await. Without effective safeguards, AI video-generators could threaten democracies with convincing “deepfake” videos of things that never happened, or — as is already the case with AI image generators — flood the internet with fake pornographic scenes depicting what appear to be real people with recognizable faces. Under pressure from regulators, major tech companies have promised to watermark AI-generated outputs to help identify what’s real.
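
    One simple form the watermarking promise can take is a cryptographic provenance tag attached to generated output. The sketch below is a toy illustration using an HMAC, not any company's actual scheme; the key and content names are assumptions, and real approaches (signed metadata, pixel-level watermarks) are designed to survive editing in ways this sketch does not.

```python
import hashlib
import hmac

# Toy illustration of provenance tagging: the generator signs its output
# with a secret key, and anyone holding the key can later check whether a
# clip carries a valid "made by AI" tag. Key and payload are hypothetical.

KEY = b"generator-secret"  # assumed key held by the AI provider

def tag_output(content: bytes) -> str:
    """Produce a provenance tag for a generated artifact."""
    return hmac.new(KEY, content, hashlib.sha256).hexdigest()

def is_tagged(content: bytes, tag: str) -> bool:
    """Check whether content matches a previously issued tag."""
    return hmac.compare_digest(tag_output(content), tag)

clip = b"frames-of-generated-video"
tag = tag_output(clip)
print(is_tagged(clip, tag))               # True: provenance verified
print(is_tagged(b"edited-" + clip, tag))  # False: content was altered
```

    The fragility shown in the last line is exactly why production watermarks are embedded in the media itself rather than computed over raw bytes.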

    There also are copyright disputes brewing about the video and image collections the AI systems are being trained upon (neither Runway nor OpenAI discloses its data sources) and to what extent they are unfairly replicating trademarked works. And there are fears that, at some point, video-making machines could replace human jobs and artistry.

    For now, the longest AI-generated video clips are still measured in seconds, and can feature jerky movements and telltale glitches such as distorted hands and fingers. Fixing that is “just a question of more data and more training,” and the computing power on which that training depends, said Alexander Waibel, a computer science professor at Carnegie Mellon University who’s been researching AI since the 1970s.

    “Now I can say, ‘Make me a video of a rabbit dressed as Napoleon walking through New York City,’” Waibel said. “It knows what New York City looks like, what a rabbit looks like, what Napoleon looks like.”

    Which is impressive, he said, but still far from crafting a compelling storyline.

    Before it released its first-generation model last year, Runway’s claim to AI fame was as a co-developer of the image-generator Stable Diffusion. Another company, London-based Stability AI, has since taken over Stable Diffusion’s development.

    The underlying “diffusion model” technology behind most leading AI generators of images and video works by mapping noise, or random data, onto images, effectively destroying an original image and then predicting what a new one should look like. It borrows an idea from physics that can be used to describe, for instance, how gas diffuses outward.

    “What diffusion models do is they reverse that process,” said Phillip Isola, an associate professor of computer science at the Massachusetts Institute of Technology. “They kind of take the randomness and they congeal it back into the volume. That’s the way of going from randomness to content. And that’s how you can make random videos.”
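
    Isola's description can be made concrete with a toy one-dimensional example. The sketch below cheats by replaying the exact noise it recorded during the forward process; a real diffusion model replaces that replay with a learned denoiser, and all constants here are illustrative.

```python
import random

# Toy 1-D diffusion: the forward process gradually destroys a "signal"
# with noise, and the reverse process walks back from randomness toward
# the original content by undoing each noising step.

BETA = 0.1  # per-step noise fraction (assumed)

def forward(x, steps, rng):
    """Noise the signal step by step, recording each noise draw."""
    noises = []
    for _ in range(steps):
        eps = rng.gauss(0.0, 1.0)
        noises.append(eps)
        x = (1 - BETA) ** 0.5 * x + BETA ** 0.5 * eps  # mix in noise
    return x, noises

def reverse(x, noises):
    """Invert the forward process exactly, newest noise first."""
    for eps in reversed(noises):
        x = (x - BETA ** 0.5 * eps) / (1 - BETA) ** 0.5
    return x

rng = random.Random(0)
x0 = 1.0
xt, noises = forward(x0, steps=50, rng=rng)
x_recovered = reverse(xt, noises)
print(round(x_recovered, 6))  # the original "image" value reappears
```

    A trained model cannot replay the true noise, so it learns to predict it from the noisy input; the reversal loop otherwise has the same shape.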

    Generating video is more complicated than still images because it needs to take into account temporal dynamics, or how elements within the video change over time and across sequences of frames, said Daniela Rus, another MIT professor who directs its Computer Science and Artificial Intelligence Laboratory.

    Rus said the computing resources required are “significantly higher than for still image generation” because “it involves processing and generating multiple frames for each second of video.”
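
    Rus's point can be put in rough numbers. The frame rate and clip length below are illustrative assumptions, not figures from the article.

```python
# Back-of-the-envelope arithmetic: a video generator must produce many
# correlated frames where an image generator produces one.

fps = 24      # assumed frame rate
seconds = 4   # assumed clip length, typical of current short outputs
frames = fps * seconds

# Even ignoring the extra cost of modeling motion across frames, the
# workload is at least `frames` times that of a single still image.
print(frames)  # 96
```

    The true multiplier is higher still, because the model must also keep those 96 frames consistent with each other over time.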

    That’s not stopping some well-heeled tech companies from trying to keep outdoing each other in showing off higher-quality AI video generation at longer durations. Requiring written descriptions to make an image was just the start. Google recently demonstrated a new project called Genie that can be prompted to transform a photograph or even a sketch into “an endless variety” of explorable video game worlds.

    In the near term, AI-generated videos will likely show up in marketing and educational content, providing a cheaper alternative to producing original footage or obtaining stock videos, said Aditi Singh, a researcher at Cleveland State University who has surveyed the text-to-video market.

    When Madonna first talked to her team about AI, the “main intention wasn’t, ’Oh, look, it’s an AI video,’” said Kasiuha, the creative director.

    “She asked me, ‘Can you just use one of those AI tools to make the picture more crisp, to make sure it looks current and looks high resolution?’” Kasiuha said. “She loves when you bring in new technology and new kinds of visual elements.”

    Longer AI-generated movies are already being made. Runway hosts an annual AI film festival to showcase such works. But whether that’s what human audiences will choose to watch remains to be seen.

    “I still believe in humans,” said Waibel, the CMU professor. “I still believe that it will end up being a symbiosis where you get some AI proposing something and a human improves or guides it. Or the humans will do it and the AI will fix it up.”

    ————

    Associated Press journalist Joseph B. Frederick contributed to this report.

  • Sports analytics may be outnumbered when it comes to artificial intelligence

    BOSTON — When it comes to artificial intelligence, the sports analytics crowd may be outnumbered.

    The people who killed the sacrifice bunt and turned NBA games into a 3-point shooting contest aren’t quite sure what will happen when AI fully invades sports — whether in the front office or on the field.

    “I’ve been in computer science a long time. This is the first thing we don’t understand,” Philadelphia 76ers team president Daryl Morey said Friday at the MIT Sloan Sports Analytics Conference.

    “That’s mind-boggling,” Morey said. “We’ve actually now created something, with 0’s and 1’s, where every step we’ve made the creation, but we don’t understand the results.”

    The MIT conference annually brings together thousands of number-crunching sports nerds, who turn their data models loose on hot topics such as diversity, gambling or reversing the slowing pace of baseball games. But this year’s gathering had a decidedly AI focus, with panels and working papers on the potential for generative artificial intelligence to transform sports.

    One talk looked at baseball strategy, another at how to provide Olympic content for the 200-plus countries competing in four dozen different sports, and a research paper used AI to extract player tracking data from a soccer broadcast.

    Morey, one of the conference’s founders, was on a panel called “Winning with AI: The future of AI in sports.” The discussion touched on potential for improvements in scheduling, player safety, advertising, ticket sales and broadcasts that convert the on-field action into a Disney cartoon.

    Kevin Lopes, an ESPN vice president for development and innovation, compared AI to the iPhone, which transformed everyday life by giving everyone with some coding skills the chance to come up with their own applications.

    “I think about that when I think about generative AI,” Lopes said. “I don’t think anyone quite knows what that is yet. That’s fascinating to me, and what’s going to be the next thing.

    “We exist in this moment in history, in my humble opinion, that every day we’re seeing new incremental innovations in AI,” he said. “What’s it going to be for the freshman at MIT in two years.”

    Make no mistake, though: AI is already here.

    Ballplayer-turned-broadcaster Carlos Peña said AI can be used to help a batter eliminate his blind spots. (But players will resist, he said, unless it’s stripped of its “mathiness” and translated into simple guidance such as, “look for the fastball up and in.”)

    Anticipating complaints from scouts and others who say analytics can’t replace intuition, Peña said: “That’s not what we’re trying to do here. What we’re trying to do is enhance intuition.”

    Christopher Jackson, the head of digital data and analytics for the Olympics, said AI can help create website content to satisfy fans of lower-profile sports from far-flung countries that usually don’t draw the attention of the mainstream media. One problem: Olympic planning is measured in decades, while major changes in AI come around every six months or so.

    Amazon Web Services global head Julie Souza said the NFL is saving $2 million a year by running AI on its schedule, which has 1 quadrillion — that’s a one followed by 15 zeros — potential options that must account for holidays, shared stadiums and travel. AI is already dissecting which plays — and even which bodily poses — are most likely to cause injuries in a game, she said.

    “Rules are changing to make the game safer, to make the players more protected,” Souza said, adding that the information can trickle up to the military and others.

    “It’s not just the NFL benefitting from this,” she said. “We’re not going back. There’s no way to go back from this. We’re just learning more and making the game safer. Which is great not just for the league, but for us fans to have our players on the field more.”

    Morey said the 76ers use AI for productivity — speeding up routine tasks — but it isn’t quite sharp enough yet to outdo the humans they have trying to improve their predictive modeling. “We haven’t found a ton there, but that will change,” he said.

    And they will keep trying.

    “There’s a lot of scary things with this, but it sort of is. This is happening,” Morey said. “There isn’t going to be a way to do all the safety stuff. … There isn’t actually going to be any way to control it. You really just lean into it, honestly, to help your business, to help what you’re doing.

    “And there could be a very scary thing you hit. But what’s the alternative? Not embracing it?” he said. “That makes no sense.”

    ___

    AP sports: https://apnews.com/sports

  • Humanoid robot-maker Figure partners with OpenAI and gets backing from Jeff Bezos

    ChatGPT-maker OpenAI is looking to fuse its artificial intelligence systems into the bodies of humanoid robots as part of a new deal with robotics startup Figure.

    Sunnyvale, California-based Figure announced the partnership Thursday along with $675 million in venture capital funding from a group that includes Amazon founder Jeff Bezos as well as Microsoft, chipmaker Nvidia and the startup-funding divisions of Intel and OpenAI.

    Figure is less than two years old and doesn’t have a commercial product but is persuading influential tech industry backers to support its vision of shipping billions of human-like robots to the world’s workplaces and homes.

    “If we can just get humanoids to do work that humans are not wanting to do because there’s a shortfall of humans, we can sell millions of humanoids, billions maybe,” Figure CEO Brett Adcock told The Associated Press last year.

    For OpenAI, which dabbled in robotics research before pivoting to a focus on the AI large language models that power ChatGPT, the partnership will “open up new possibilities for how robots can help in everyday life,” said Peter Welinder, the San Francisco company’s vice president of product and partnerships, in a written statement.

    Financial terms of the deal between Figure and OpenAI weren’t disclosed. The collaboration will have OpenAI building specialized AI models for Figure’s humanoid robots, likely based on OpenAI’s existing technology such as GPT language models, the image-generator DALL-E and the new video-generator Sora.

    That will help “accelerate Figure’s commercial timeline” by enabling its robots to “process and reason from language,” according to Figure’s announcement. The company announced in January an agreement with BMW to put its robots to work at a car plant in Spartanburg, South Carolina, but hadn’t yet determined exactly how or when they would be used.

    Robotics experts differ on the usefulness of robots shaped in human form. Most robots employed in factory and warehouse tasks might have some animal-like features — a robotic arm, finger-like grippers or even legs — but aren’t truly humanoid. That’s in part because it’s taken decades for robotics engineers to develop robots that can walk effectively on two legs or reliably manipulate small objects.

    Whitney Rockley, co-founder and managing partner of Toronto-based venture capital firm McRock Capital, said she understands the appeal of humanoids because they’re relatable, evoking emotions and starting conversations. In practice, however, she said they’re still awkward and pose huge technical challenges, which is why she’s sticking to investing in non-humanoid robots.

    “We look at robotics and automation really practically and say, ‘What kind of timeline are we willing to commit to in order to really see commercial liftoff and deployments and applications?’” Rockley said. “And I think that the groups that are backing a lot of humanoid solutions right now, they’re in there for the long haul, which is great because you need that, but it’s going to take decades upon decades.”

    OpenAI CEO Sam Altman hinted at a renewed interest in robotics in a podcast hosted by Microsoft co-founder Bill Gates and released early this year in which Altman said the company was starting to invest in promising robotics hardware platforms after having earlier abandoned its own research.

    “We started robots too early and so we had to put that project on hold,” Altman told Gates, noting that “we were dealing with bad simulators and breaking tendons” that were distracting from the company’s other work.

    “We realized more and more over time that what we really first needed was intelligence and cognition and then we could figure out how we could adapt it to physicality,” he said.

    ——-

    The AP has signed a deal with OpenAI for it to access its news archive.

  • Larry Magid: I like generative AI, but I can write my own correspondence and columns, thank you

    As regular readers of this column know, I am pretty bullish on generative AI (GAI). I’ve spent many hours using products like ChatGPT, Google Gemini and Microsoft Copilot to make travel plans, get product information, get ideas for meal planning along with recipes and lots more. I’ve also used DALL-E 2, the image-generation program built into the $20-a-month version of ChatGPT, to create images for my website, holiday cards and illustrations for presentation slides.

    Larry Magid

  • Microsoft’s new deal with France’s Mistral AI is under scrutiny from the European Union

    LONDON — The European Union is looking into Microsoft’s partnership with French startup Mistral AI as part of its broader review of the booming generative artificial intelligence sector to see if it raises any competition concerns.

    The 27-nation bloc’s executive commission said Tuesday in a brief statement that it’s analyzing the agreement between the two companies announced a day earlier. Microsoft declined to comment. Mistral did not respond to a request for comment.

    Microsoft said Monday it was teaming up with Mistral through a 15 million euro ($16 million) investment in the French company, which emerged less than a year ago. The agreement could cut the U.S. software giant’s reliance on ChatGPT-maker OpenAI for supplying the next wave of chatbots and other generative AI products.

    The commission, the EU’s top antitrust enforcer, said it’s including the deal as part of its broader review of the generative AI market. It’s examining agreements between digital tech giants and generative AI developers and providers.

    The EU last month started looking into Microsoft’s multibillion-dollar deal with San Francisco-based OpenAI, which could lead to a formal merger investigation.

  • Babbily: Revolutionizing the AI Landscape With a User-Friendly Approach

    “Babbily was born out of this vision, with a commitment to making AI simple, approachable, and useful for everyone, regardless of their technical expertise.”

    In an era where artificial intelligence (AI) is reshaping industries and daily lives, Babbily emerges as a pioneering platform, democratizing access to AI technology. With its official launch, Babbily aims to bridge the gap between the complex world of AI and the everyday user, offering a suite of tools designed for simplicity and effectiveness.

    Babbily’s comprehensive features, including Chat, Image Chat, Image Generator, Text-to-Speech, and specially designed Easy Tools, cater to a wide range of needs, from enhancing productivity and creativity to facilitating learning. The platform’s intuitive interface ensures that users, regardless of their technical background, can harness the power of AI without the daunting learning curve often associated with such technology.

    To introduce users to the seamless experience of Babbily, the platform is offering a 7-day free trial, inviting individuals to explore the various functionalities and discover the potential of AI in their personal and professional lives.

    Chris Crawford, CEO of Babbily, shared his insights on the motivation behind the creation of the platform: “The inspiration for Babbily stemmed from a recognition of the rapid advancement of AI and the realization that it remained an intimidating concept for many. Our goal was to develop a platform that not only harnessed the power of AI but also presented it in a user-friendly manner. Babbily was born out of this vision, with a commitment to making AI simple, approachable, and useful for everyone, regardless of their technical expertise. We believe that AI should be a tool that empowers, not overwhelms, and Babbily is our answer to that belief.”

    Babbily stands as a testament to the potential of AI when made accessible and user-friendly. It is not just a tool; it is a companion in the digital age, ready to assist users in navigating the ever-evolving landscape of AI. Whether you’re a student seeking academic support, a professional looking to streamline workflows, or simply a curious individual exploring the possibilities of AI, Babbily is your gateway to unlocking the transformative power of artificial intelligence.

    Embark on your AI journey today with Babbily’s 7-day free trial at https://babbily.com/.

    About Babbily

    Babbily is an AI platform designed to make artificial intelligence simple and accessible for everyone. With a focus on user-friendliness and versatility, Babbily offers a range of features to enhance productivity, creativity, and learning. For more information, visit https://babbily.com/.

    Source: Babbily

  • Microsoft is giving Windows Photos a boost with a generative AI-powered eraser

    Microsoft has announced a generative AI-powered eraser for pictures, which gives you an easy way of removing unwanted elements from your photos. Windows Photos has long had a Spot Fix tool that can remove parts of an image for you, but the company says Generative erase is an enhanced version of the feature. Apparently, this newer tool can create “more seamless and realistic” results even when large objects, such as bystanders or clutter in the background, are removed from an image.

    If you’ll recall, both Google and Samsung have their own versions of AI eraser tools on their mobile devices. Google’s used to be exclusively available on newer Pixel phones until it was rolled out to older models. Microsoft’s version, however, gives you access to an AI-powered photo eraser on your desktop or laptop computer. You only need to fire up the image editor in Photos to start using the feature. Simply choose the Erase option and then use the brush to create a mask over the elements you want to remove. You can even adjust the brush size to make it easier to select thinner or thicker objects, and you can also choose to highlight more than one element before erasing them all.

    At the moment, though, access to Generative erase is pretty limited. It hasn’t been released widely yet, and you can only use it if you’re a Windows Insider through the Photos app on Windows 10 and Windows 11 for Arm64 devices.

    Photo of a dog against a beach background. (Image: Microsoft)

    Mariella Moon

  • Google promises to fix Gemini’s image generation following complaints that it’s ‘woke’

    Google’s Gemini chatbot, which was formerly called Bard, has the capability to whip up AI-generated illustrations based on a user’s text description. You can ask it to create pictures of happy couples, for instance, or people in period clothing walking modern streets. As the BBC notes, however, some users are criticizing Google for depicting specific white figures or historically white groups of people as racially diverse individuals. Now, Google has issued a statement, saying that it’s aware Gemini “is offering inaccuracies in some historical image generation depictions” and that it’s going to fix things immediately.

    According to Daily Dot, a former Google employee kicked off the complaints when he tweeted images of women of color with a caption that reads: “It’s embarrassingly hard to get Google Gemini to acknowledge that white people exist.” To get those results, he asked Gemini to generate pictures of American, British and Australian women. Other users, mostly those known for being right-wing figures, chimed in with their own results, showing AI-generated images that depict America’s founding fathers and the Catholic Church’s popes as people of color.

    In our tests, asking Gemini to create illustrations of the founding fathers resulted in images of white men with a single person of color or woman in them. When we asked the chatbot to generate images of the pope throughout the ages, we got photos depicting black women and Native Americans as the leader of the Catholic Church. Asking Gemini to generate images of American women gave us photos with a white, an East Asian, a Native American and a South Asian woman. The Verge says the chatbot also depicted Nazis as people of color, but we couldn’t get Gemini to generate Nazi images. “I am unable to fulfill your request due to the harmful symbolism and impact associated with the Nazi Party,” the chatbot responded.

    Gemini’s behavior could be a result of overcorrection, since chatbots and robots trained on AI over the past years tended to exhibit racist and sexist behavior. In one experiment from 2022, for instance, a robot repeatedly chose a Black man when asked which among the faces it scanned was a criminal. In a statement posted on X, Gemini Product Lead Jack Krawczyk said Google designed its “image generation capabilities to reflect [its] global user base, and [it takes] representation and bias seriously.” He said Gemini will continue to generate racially diverse illustrations for open-ended prompts, such as images of people walking their dog. However, he admitted that “[h]istorical contexts have more nuance to them and [his team] will further tune to accommodate that.”

    Mariella Moon

  • Nvidia’s 4Q revenue, profit soar thanks to demand for its chips used for artificial intelligence

    SANTA CLARA, Calif. — Nvidia Corp., which has seen its value skyrocket over the past year thanks to soaring demand for its graphics chips used for artificial intelligence, posted stronger-than-expected results Wednesday for its latest quarter, with its revenue more than tripling from a year earlier.

    Nvidia reported revenue for its fiscal fourth quarter that ended Jan. 28 of $22.1 billion, up from $6.05 billion.

    The company based in Santa Clara, California, earned $12.29 billion, compared to a profit of $1.41 billion a year ago.

    Adjusted for one-time items, Nvidia earned $5.16 a share in the latest quarter, topping Wall Street forecasts of $4.59 per share, based on analysts surveyed by FactSet Research. Analysts had expected revenue of $20.4 billion for the period that concluded the company’s fiscal year.

    The company’s specialized chips are key components that help power different forms of artificial intelligence, including the latest generative AI chatbots such as ChatGPT and Google’s Gemini.

    “Accelerated computing and generative AI have hit the tipping point,” said Jensen Huang, founder and CEO of Nvidia, in a statement. “Demand is surging worldwide across companies, industries and nations.”

    Nvidia carved out an early lead in the hardware and software needed to tailor its technology to AI applications, partly because Huang began to nudge the company into what was then seen as a still half-baked technology more than a decade ago. It also makes chips for gaming and cars.

    Huang looked at ways that Nvidia chipsets known as graphics processing units might be tweaked for AI-related applications to expand beyond their early inroads in video gaming.

    “Another blockbuster quarter from Nvidia raises the question of how long its soaring performance will last,” said Insider Intelligence analyst Jacob Bourne. “It has a massive lead in the growing global AI chip sector but can’t rest on its laurels.”

    Bourne said Nvidia faces a number of challenges, including broader economic uncertainty, tech giants’ drive to make their own AI chips and emerging rivals. But he said the company’s market strength, for the near future, is “durable.”

    For the current quarter, Nvidia expects revenue of about $24 billion. Analysts are currently expecting Nvidia to post revenue of $22.2 billion for the February-April period. The company expects “continued growth” into 2025 and beyond.

    The company said its data center revenue grew in all regions except China, where the U.S. government has imposed export regulations.

    “Although we have not received licenses from the U.S. government to ship restricted products to China, we have started shipping alternatives that don’t require a license for the China market,” Huang said in a conference call with analysts.

    Nvidia relies heavily on the world’s biggest maker of computer chips, the Taiwan Semiconductor Manufacturing Company, to churn out the chips that Nvidia designs.

    Taiwan’s Taiex benchmark index last week jumped 3% to a record high, buoyed by a surge in TSMC’s share price.

    The leap came after Morgan Stanley analysts raised their price target on Nvidia’s stock to $750 from $603, citing an increase in demand for AI chips.

    “Generative AI has kicked off a whole new investment cycle to build the next $1 trillion of infrastructure of AI generation factories,” Huang said. “We believe these two trends will drive a doubling of the world’s data center infrastructure installed base in the next five years and will represent an annual market opportunity in the hundreds of billions.”

    Nvidia’s shares jumped 7.5%, to $726 in after-hours trading.

  • Ready to go beyond Google? Here’s how to use new generative AI search sites

    LONDON — It’s not just you. A lot of people think Google searches are getting worse. And the rise of generative AI chatbots is giving people new and different ways to look up information.

    While Google has been the one-stop shop for decades — after all, we commonly call searches “googling” — its longtime dominance has attracted a flood of sponsored or spammy links and junk content fueled by “search engine optimization” techniques. That pushes down genuinely useful results.

    A recent study by German researchers suggests the quality of results from Google, Bing and DuckDuckGo is indeed declining. Google says its results are of significantly better quality than its rivals, citing measurements by third parties.

    Now, chatbots powered by generative artificial intelligence, including from Google itself, are poised to shake up how search works. But they have their own issues: Because the tech is so new, there are concerns about AI chatbots’ accuracy and reliability.

    If you want to try the AI way, here’s a how-to:

    Google users don’t have to look far. The company last year launched its own AI chatbot assistant, known as Bard, but recently retired that name and replaced it with a similar service, Gemini.

    Bard users are now redirected to the Gemini site, which can be accessed directly on desktop or mobile browsers.

    The Gemini app also launched in the U.S. this month and is rolling out in Japanese, Korean and English globally — except in Britain, Switzerland and Europe — according to an update notice, which hints that more countries and languages will be “coming soon.”

    Google also has been testing out a new search offering, dubbed “Search Generative Experience” that replaces links with an AI-generated snapshot of key info. But it’s limited to U.S. users signing up through its experimental Labs site.

    Microsoft’s Bing search engine has provided generative AI searches powered by OpenAI’s ChatGPT technology for about a year, first under the name Bing Chat, now rebranded as Copilot.

    On the Bing search home page, click the Chat or Copilot button underneath the search window and you’ll get a conversational interface where you type your question. There’s also a Copilot app.

    A slew of startup AI search sites have emerged, but they aren’t as easy to find. A standard Google search isn’t that helpful, but searches on Copilot and Bard turned up a number of names, including Perplexity, HuggingChat, You.com, Komo, Andi, Phind, Exa and AskAI.

    Most of these services have free versions. They typically limit how many queries you can make but offer premium levels that provide smarter AI and more features.

    Gemini users, for example, can pay $20 for the advanced version, which comes with access to its “most capable” model, Ultra 1.0.

    Gemini users need to be signed in to their Google accounts and be at least 13 years old — 18 in Europe or Canada. Copilot users don’t have to sign in to a Microsoft account and can access the service through Bing search or Copilot home pages.

    Startup sites are largely free to use and don’t require setting up an account. Many also have premium levels.

    Rather than typing in a string of keywords, AI queries should be conversational — for example, “Is Taylor Swift the most successful female musician?” or “Where are some good places to travel in Europe this summer?”

    Perplexity advises using “everyday, natural language.” Phind says it’s best to ask “full and detailed questions” that start with, say, “what is” or “how to.”

    If you’re not satisfied with an answer, some sites let you ask follow-up questions to zero in on the information needed. Some give suggested or related questions.

    Microsoft’s Copilot lets you choose three different chat styles: creative, balanced or precise.

    Unlike Google search results that throw up a list of links, including sponsored ones, AI chatbots spit out a readable summary of the information, sometimes with a few key links as footnotes. The answers will vary — sometimes widely — depending on the site.

    They can shine when you’re searching for an obscure factoid, such as, say, a detail about a European Union policy.

    Answers from Phind.com were among the most readable and consistently were provided in a narrative form. But the site has mysteriously gone offline at some points.

    Testing a simple query — what’s the average temperature in London for the second half of February? — produced a similar range of results on most sites: 7-9 degrees Celsius (45-48 Fahrenheit).

    Andi strangely provided current weather conditions for New York, though it used the correct city during another try later.

    Another search — the names and tenures of the CEOs of British luxury car maker Aston Martin — is the kind of info available online but needs some work to piece together.

    Most sites came up with names from the past decade or two. AskAI provided a list dating to 1947, along with its top three “authoritative sources,” but without links.

    While chatbots may sound authoritative because they produce answers that seem like they’re written by a confident human, they’re not always correct. AI chatbots have been known for providing deceptively convincing responses, dubbed “hallucinations.” HuggingChat warns, “Generated content may be inaccurate or false” and Gemini says it could “display inaccurate info, including about people.”

    These AI systems, known as large language models, are trained on vast pools of information culled from the web and use algorithms to come up with coherent answers, but not all reveal how they arrived at their responses.

    Some AI chatbots disclose the models that their algorithms have been trained on. Others provide few or no details. The best advice is to try more than one and compare the results, and always double-check sources.

    For example, at one point Komo insisted Canada’s population in 1991 was about 1 million people and stood by this wrong number even after I followed up to ask if it was sure. It cited a Wikipedia page, which revealed the figure came from a table for the country’s indigenous population. It found the correct number when I tried again later.

    ___

    Is there a tech challenge you need help figuring out? Write to us at onetechtip@ap.org with your questions.
