ReportWire

Tag: GitHub

  • Meta’s Bold Strategy to Beat OpenAI Starts With These 8 AI Innovators


    OpenAI might be the center of the AI development world these days, but the competition has been heating up for quite a while. And few competitors are bankrolled on the same level as Meta. With a market capitalization of more than $1.75 trillion and a CEO who’s not afraid to spend heavily, Meta has been on a hiring spree in the AI world for months, poaching top-tier talent from a variety of competitors.

    It appeared recently that the wave of high-profile (and high-dollar) recruitments was coming to an end. In August, Meta quietly announced a freeze on hiring after adding roughly 50 AI researchers and engineers. This month, though, two more big names have joined the Meta roster.

    While Meta might have a gap to close with its AI rivals, the company has assembled an all-star team to catch up and move forward. Here are some of the most notable experts to come on board.

    Andrew Tulloch, co-founder of Thinking Machines Lab

    Tulloch partnered with OpenAI’s former chief technology officer Mira Murati to launch Thinking Machines Lab in February of this year. Now he’s returning to his roots. Considered a leading researcher in the AI field, Tulloch previously spent 11 years at Meta, leaving in 2023 to join OpenAI, then departing with Murati. Meta founder Mark Zuckerberg has been chasing Tulloch for a while, reportedly making an offer with a $1.5 billion compensation package at one point, which Tulloch rejected. (Meta has called the description of the offer “inaccurate and ridiculous.”) There’s no word on what Tulloch was offered that made him decide to move.

    Ke Yang, Senior Director of Machine Learning at Apple

    Yang, who was appointed to lead Apple’s AI-driven web search effort just weeks ago, is another big October Meta hire. At Apple, his team (Answers, Knowledge and Information, or AKI) was working to make Siri more ChatGPT-like by pulling information from the web, making his departure one of Meta’s most notable poachings. Meta convinced him to come over after recruiting several of his colleagues.

    Shengjia Zhao, co-creator of OpenAI’s ChatGPT

    Zhao joined Meta in June to serve as chief scientist of Meta Superintelligence Labs. Beyond co-creating ChatGPT, he also played a role in building GPT-4 and led synthetic data at OpenAI for a stint. “Shengjia has already pioneered several breakthroughs including a new scaling paradigm and distinguished himself as a leader in the field,” Zuckerberg wrote in a social media post in July. “I’m looking forward to working closely with him to advance his scientific vision.”

    Daniel Gross, co-founder of Safe Superintelligence

    As it did with Murati’s Thinking Machines Lab, Meta tried to acquire Safe Superintelligence, the AI startup co-founded by OpenAI’s former chief scientist, Ilya Sutskever. When that offer was rejected, Zuckerberg began looking for talent, luring co-founder and CEO Gross in June. Gross is working on AI products for Meta’s superintelligence group. By joining Meta, he’s reunited with former GitHub CEO Nat Friedman, with whom he once created the venture fund NFDG.

    Ruoming Pang, Apple’s head of AI models

    Pang was one of the first high-profile departures from Apple to Meta, making the jump in July. At the time, he was Apple’s top executive overseeing AI models and had been with the company since 2021. While there, he helped develop the large language model that powers Apple Intelligence and other AI features, such as email and webpage summaries.

    Matt Deitke, co-founder of Vercept

    Vercept is a startup that’s attempting to build AI agents that use other software to autonomously perform tasks, something that caught Zuckerberg’s attention. Deitke proved hard to lure, though. He reportedly turned down a $125 million, four-year offer, but a direct appeal by Zuckerberg (and a reported doubling of that offer) convinced him to make the move (with the blessing of his peers). Kiana Ehsani, his co-founder and CEO, announced his departure on social media, joking, “We look forward to joining Matt on his private island next year.”

    Alexandr Wang, founder and CEO of Scale AI

    Wang left his startup to join Meta after the social media company made a $14.3 billion investment into Scale AI (without any voting power in the company). “As you’ve probably gathered from recent news, opportunities of this magnitude often come at a cost,” Wang wrote in a memo to staff. “In this instance, that cost is my departure.” Wang joined Meta’s superintelligence unit. Scale made its name by helping companies like OpenAI, Google and Microsoft prepare data used to train AI models. Meta was already one of its biggest customers.

    Nat Friedman, former CEO of GitHub

    Friedman was already a part of Meta’s Advisory Group before he was brought on full-time. That external advisory council provides guidance on technology and product development. Now, he’s working with Wang to run the superintelligence unit. Friedman previously was CEO of GitHub, a cloud-based platform that hosts code for software development. Most recently, he was a board member at the AI investment firm he started with Safe Superintelligence’s Gross.

    As for what Zuckerberg is going to do with all this talent, the sky’s the limit, but there’s some catching up to do first. Meta’s Llama large language models haven’t quite matched those of OpenAI or Google, but with the company’s gargantuan user base (3.4 billion people use one of its apps each day), Meta’s AI could still be one of the most widely used in the years to come.


    Chris Morris


  • North Korean Scammers Are Doing Architectural Design Now


    “The plans are being used and being built,” says Michael “Barni” Barnhart, a leading authority in North Korean hacking and cyber threats, who works for insider threat security firm DTEX. Along with other DPRK researchers, who call themselves a “Misfit” alliance, Barnhart has seen this cluster of workers conducting architectural work and says similar other efforts have been detected. “They will do the CAD renderings, they’ll do the drawings,” he says. “It’s not like a hypothetical—those physical things do exist out there.”

    Barnhart—who previously found North Korean animators appearing to work on Amazon and Max shows—says that he has also seen potential front companies set up to help run the operations and provide a veneer of legitimacy. The findings raise questions about the quality of the structural work, and safety concerns if the designs are actually built. “In some of our investigations, these plans and these products that they’re making for these remodels and renderings, they’re not getting good reviews,” Barnhart says. “We do have indications that also they’re being hired to do critical infrastructure.”

    One 24-minute-long screen recording seen by WIRED shows how the freelance operation could work. In the video, a person signs up to a freelance work website and sets up a new profile, writing that they are a “licensed structural engineer/architect in the USA.” They pick a profile image from a folder of apparently downloaded files, translate text between English and Korean, and access a Social Security number generator website during the sign-up process.

    Once the account is created, the video shows them beginning to send out messages soliciting work, with one saying: “I can provide you [sic] permit drawing plan set for your residential home design within a few days.”

    Other screen recordings show the workers having conversations with potential clients, and in at least one instance there is a recording of an online call discussing possible work. The Kela researcher, who asked not to be named for security reasons, says it appeared some prospective customers returned to the scammers after likely having work completed. The researchers say some kinds of work appeared to be priced from a few hundred dollars up to around $1,000 per job.

    “This is an opportunistic nation,” DTEX’s Barnhart says. While many companies have started to figure out that North Korea’s IT workers often apply for remote tech jobs using false identities, deepfakes on video calls, and local facilitators to run their operations, the workers are consistently changing their approaches. Barnhart says it appears that architectural work has been successful for the alleged DPRK workers, and that evidence shows the IT worker program can be more subtle than simply trying to get hired at companies.

    “They’re moving to places where we’re not looking,” Barnhart says. “They’re also doing things like call centers. They’re doing HR and payroll and accounting. Things that are just remote roles and not necessarily remote hires.”


    Matt Burgess


  • You Can Play The OG Diablo In Your Browser For Free Right Now



    Image: Blizzard / Kotaku

    It’s now possible to quickly and easily play the original 1997 Diablo on your PC or phone via a simple website. Just load it up on your browser and you can start killing demons and skeletons like it’s the ‘90s all over again.

    The original Diablo was developed by Blizzard North and released in January 1997 for PC. Its single dungeon, evil monsters, creepy town, and loot-filled catacombs forever changed the action RPG genre. Today, the OG Diablo might seem a bit small and simple compared to the wild open-world adventure we find in 2023’s Diablo 4. But Diablo’s vibes are still unmatched by any of its sequels, and now you can experience the classic ARPG for free on your phone or PC browser.

    As spotted by PC Gamer, a new website has popped up that lets you play the shareware version of the original Diablo in your browser. This new web-based port of the game was built using Diablo’s original source code, which was previously reconstructed by GalaXyHaXz and the Devilution team and can be found on GitHub.

    Image: Blizzard / Izie

    Now, keep in mind that unless you own Diablo and upload the “DIABDAT.MPQ” file, you won’t have access to everything found in the retail release. Still, the shareware version of Diablo lets you play as a warrior who can’t talk to NPCs, but can kill demons and loot weapons in the dungeon under the church in Tristram.

    In my testing, this browser-based port of Diablo plays really well. I had no issues exploring the dark corridors and killing zombies and skeletons. Just toss your old Diablo save and DIABDAT.MPQ file onto a service like Google Drive or a USB stick and you can play Blizzard’s seminal ARPG anywhere with an internet connection.

    In fact, you could be playing Diablo right now on the device you are currently using instead of working or reading the last sentence of this blog.



    Zack Zwiezen


  • Nintendo blitzes GitHub with over 8,000 emulator-related DMCA takedowns



    Nintendo sent a Digital Millennium Copyright Act (DMCA) notice for over 8,000 GitHub repositories hosting code from the Yuzu Switch emulator, which the Zelda maker previously described as enabling “piracy at a colossal scale.” The sweeping takedown comes two months after Yuzu’s creators quickly settled a lawsuit with Nintendo and its notoriously trigger-happy legal team for $2.4 million.

    GamesIndustry.biz first reported on the DMCA notice, affecting 8,535 GitHub repos. Redacted entities representing Nintendo assert that the Yuzu source code contained in the repos “illegally circumvents Nintendo’s technological protection measures and runs illegal copies of Switch games.”

    GitHub wrote on the notice that developers will have time to change their content before it’s disabled. In keeping with its developer-friendly approach and branding, the Microsoft-owned platform also offered legal resources and guidance on submitting DMCA counter-notices.

    Nintendo’s legal blitz, perhaps not coincidentally, comes as game emulators are enjoying a resurgence. Last month, Apple loosened its restrictions on retro game emulators in the App Store (likely in response to regulatory threats), leading to the Delta emulator establishing itself as the de facto choice and reaching the App Store’s top spot. Nintendo may have calculated that emulators’ moment in the sun threatened its bottom line and began by squashing those that most immediately imperiled its income stream.

    Sadly, Nintendo’s largely undefended legal assault against emulators ignores a crucial use for them that isn’t about piracy. Game historians see the software as a linchpin of game preservation. Without emulators, Nintendo and other copyright holders could make a part of history obsolete for future generations, as their corresponding hardware will eventually be harder to come by.


    Will Shanklin


  • AI is keeping GitHub chief legal officer Shelley McKinley busy | TechCrunch



    GitHub’s chief legal officer, Shelley McKinley, has plenty on her plate, what with legal wrangles around its Copilot pair-programmer, as well as the Artificial Intelligence (AI) Act, which was voted through the European Parliament this week as “the world’s first comprehensive AI law.”

    Three years in the making, the EU AI Act first reared its head back in 2021 via proposals designed to address the growing reach of AI into our everyday lives. The new legal framework is set to govern AI applications based on their perceived risks, with different rules and stipulations depending on the application and use-case.

    GitHub, which Microsoft bought for $7.5 billion in 2018, has emerged as one of the most vocal naysayers around one very specific element of the regulations: muddy wording on how the rules might create legal liability for open source software developers.

    McKinley joined Microsoft in 2005, serving in various legal roles, including for hardware businesses such as Xbox and HoloLens, as well as general counsel positions based in Munich and Amsterdam, before landing in the chief legal officer hot seat at GitHub nearly three years ago.

    “I moved over to GitHub in 2021 to take on this role, which is a little bit different to some Chief Legal Officer roles — this is multidisciplinary,” McKinley told TechCrunch. “So I’ve got standard legal things like commercial contracts, product, and HR issues. And then I have accessibility, so [that means] driving our accessibility mission, which means all developers can use our tools and services to create stuff.”

    McKinley is also tasked with overseeing environmental sustainability, which ladders directly up to Microsoft’s own sustainability goals. And then there are issues related to trust and safety, which covers things like moderating content to ensure that “GitHub remains a welcoming, safe, positive place for developers,” as McKinley puts it.

    But there’s no ignoring the fact that McKinley’s role has become increasingly intertwined with the world of AI.

    Ahead of the EU AI Act getting the greenlight this week, TechCrunch caught up with McKinley in London.

    GitHub Chief Legal Officer Shelley McKinley Image Credits: GitHub

    Two worlds collide

    For the unfamiliar, GitHub is a platform that enables collaborative software development, allowing users to host, manage, and share code “repositories” (a location where project-specific files are kept) with anyone, anywhere in the world. Companies can pay to make their repositories private for internal projects, but GitHub’s success and scale have been driven by open source software development carried out collaboratively in a public setting.

    In the six years since the Microsoft acquisition, much has changed in the technological landscape. AI wasn’t exactly novel in 2018, and its growing impact was becoming more evident across society — but with the advent of ChatGPT, DALL-E, and the rest, AI has arrived firmly in the mainstream consciousness.

    “I would say that AI is taking up [a lot of] my time — that includes things like ‘how do we develop and ship AI products,’ and ‘how do we engage in the AI discussions that are going on from a policy perspective?,’ as well as ‘how do we think about AI as it comes onto our platform?’,” McKinley said.

    The advance of AI has also been heavily dependent on open source, with collaboration and shared data pivotal to some of the most preeminent AI systems today — this is perhaps best exemplified by the generative AI poster child OpenAI, which began with a strong open-source foundation before abandoning those roots for a more proprietary play (this pivot is also one of the reasons Elon Musk is currently suing OpenAI).

    As well-meaning as Europe’s incoming AI regulations might be, critics argued that they would have significant unintended consequences for the open source community, which in turn could hamper the progress of AI. This argument has been central to GitHub’s lobbying efforts.

    “Regulators, policymakers, lawyers… are not technologists,” McKinley said. “And one of the most important things that I’ve personally been involved with over the past year, is going out and helping to educate people on how the products work. People just need a better understanding of what’s going on, so that they can think about these issues and come to the right conclusions in terms of how to implement regulation.”

    At the heart of the concerns was that the regulations would create legal liability for open source “general purpose AI systems,” which are built on models capable of handling a multitude of different tasks. If open source AI developers were to be held liable for issues arising further downstream (i.e. at the application level), they might be less inclined to contribute — and in the process, more power and control would be bestowed upon the big tech firms developing proprietary systems.

    Open source software development by its very nature is distributed, and GitHub — with its 100 million-plus developers globally — needs developers to be incentivized to continue contributing to what many tout as the fourth industrial revolution. And this is why GitHub has been so vociferous about the AI Act, lobbying for exemptions for developers working on open source general purpose AI technology.

    “GitHub is the home for open source, we are the steward of the world’s largest open source community,” McKinley said. “We want to be the home for all developers, we want to accelerate human progress through developer collaboration. And so for us, it’s mission critical — it’s not just a ‘fun to have’ or ‘nice to have’ — it’s core to what we do as a company as a platform.”

    As things transpired, the text of the AI Act now includes some exemptions for AI models and systems released under free and open-source licenses — though a notable exception applies where “unacceptable”-risk or high-risk AI systems are at play. So in effect, developers behind open source general purpose AI models don’t have to provide the same level of documentation and guarantees to EU regulators — though it’s not yet clear which proprietary and open-source models will fall under its “high-risk” categorization.

    But those intricacies aside, McKinley reckons that their hard lobbying work has mostly paid off, with regulators placing less focus on software “componentry” (the individual elements of a system that open-source developers are more likely to create), and more on what’s happening at the compiled application level.

    “That is a direct result of the work that we’ve been doing to help educate policymakers on these topics,” McKinley said. “What we’ve been able to help people understand is the componentry aspect of it — there’s open source components being developed all the time, that are being put out for free and that [already] have a lot of transparency around them — as do the open source AI models. But how do we think about responsibly allocating the liability? That’s really not on the upstream developers, it’s just really downstream commercial products. So I think that’s a really big win for innovation, and a big win for open source developers.”

    Enter Copilot

    With the rollout of its AI-enabled pair-programming tool Copilot three years back, GitHub set the stage for a generative AI revolution that looks set to upend just about every industry, including software development. Copilot suggests lines or functions as the software developer types, a little like how Gmail’s Smart Compose speeds up email writing by suggesting the next chunk of text in a message.

    However, Copilot has upset a substantial segment of the developer community, including those at the not-for-profit Software Freedom Conservancy, who called for all open source software developers to ditch GitHub in the wake of Copilot’s commercial launch in 2022. The problem? Copilot is a proprietary, paid-for service that capitalizes on the hard work of the open source community. Moreover, Copilot was developed in cahoots with OpenAI (before the ChatGPT craze), leaning substantively on OpenAI Codex, which itself was trained on a massive amount of public source code and natural language models.


    GitHub Copilot Image Credits: GitHub

    Copilot ultimately raises key questions around who authored a piece of software — if it’s merely regurgitating code written by another developer, then shouldn’t that developer get credit for it? Software Freedom Conservancy’s Bradley M. Kuhn wrote a substantial piece precisely on that matter, called: “If Software is My Copilot, Who Programmed My Software?”

    There’s a misconception that “open source” software is a free-for-all — that anyone can simply take code produced under an open source license and do as they please with it. But while different open source licenses have different restrictions, they all pretty much have one notable stipulation: developers reappropriating code written by someone else need to include the correct attribution. It’s difficult to do that if you don’t know who (if anyone) wrote the code that Copilot is serving you.
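    In practice, honoring that stipulation is mechanical once the origin is known: keep the upstream notice with the reused code. Below is a minimal illustration, with an entirely invented project and author, of what that looks like for an MIT-licensed snippet:

```python
# utils/retry.py
#
# The helper below is adapted from a hypothetical MIT-licensed project.
# MIT's core condition is that the copyright and permission notices
# travel with the code, so they are retained here:
#
#   Copyright (c) 2021 Jane Example
#   Permission is hereby granted, free of charge, to any person
#   obtaining a copy of this software... (full MIT text kept alongside
#   the project, e.g. in LICENSES/example-retry.txt)

import time


def retry(fn, attempts=3, delay=0.0):
    """Call fn, retrying on any exception up to `attempts` times."""
    for i in range(attempts):
        try:
            return fn()
        except Exception:
            if i == attempts - 1:
                raise  # out of retries: surface the original error
            time.sleep(delay)
```

    The license text itself does the work here; the only discipline required of the reusing developer is not deleting the notice when the code is copied — which, of course, presumes you know where the code came from in the first place.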

    The Copilot kerfuffle also highlights some of the difficulties in simply understanding what generative AI is. Large language models, such as those underpinning ChatGPT or Copilot, are trained on vast swathes of data — much like a human software developer learns by poring over existing code, Copilot is always likely to produce output that is similar (or even identical) to what has been produced elsewhere. In other words, whenever it does match public code, the match “frequently” applies to “dozens, if not hundreds” of repositories.

    “This is generative AI, it’s not a copy-and-paste machine,” McKinley said. “The one time that Copilot might output code that matches publicly available code, generally, is if it’s a very, very common way of doing something. That said, we hear that people have concerns about these things — we’re trying to take a responsible approach, to ensure that we’re meeting the needs of our community in terms of developers [that] are really excited about this tool. But we’re listening to developers’ feedback too.”

    At the tail end of 2022, several U.S. software developers sued the company, alleging that Copilot violates copyright law and calling it “unprecedented open-source software piracy.” In the intervening months, Microsoft, GitHub, and OpenAI managed to get various facets of the case thrown out, but the lawsuit rolls on, with the plaintiffs recently filing an amended complaint around GitHub’s alleged breach-of-contract with its developers.

    The legal skirmish wasn’t exactly a surprise, as McKinley notes. “We definitely heard from the community — we all saw the things that were out there, in terms of concerns were raised,” McKinley said.

    With that in mind, GitHub made some efforts to allay concerns over the way Copilot might “borrow” code generated by other developers. For instance, it introduced a “duplication detection” feature. It’s turned off by default, but once activated, Copilot will block code completion suggestions of more than 150 characters that match publicly available code. And last August, GitHub debuted a new code-referencing feature (still in beta), which allows developers to follow the breadcrumbs and see where a suggested code snippet comes from — armed with this information, they can follow the letter of the law as it pertains to licensing requirements and attribution, and even use the entire library from which the code snippet was appropriated.


    Copilot Code Match Image Credits: GitHub

    But it’s difficult to assess the scale of the problem that developers have voiced concerns about — GitHub has previously said that its duplication detection feature would trigger “less than 1%” of the time when activated. Even then, it’s usually when there is a near-empty file with little local context to run with — so in those cases, it is more likely to make a suggestion that matches code written elsewhere.
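    As a rough sketch of how the duplication-detection gate described above could behave (this is an assumption about the logic, not GitHub’s actual implementation; the function name, corpus representation, and linear scan are invented for illustration):

```python
# Sketch of a duplication-detection filter: suppress any completion
# suggestion longer than 150 characters that matches code already
# present in a corpus of public source files.

MATCH_THRESHOLD = 150  # characters, per GitHub's published description


def should_block(suggestion: str, public_corpus: list[str]) -> bool:
    """Return True when a suggestion is long enough to be filtered and
    appears verbatim inside any known public source file."""
    if len(suggestion) <= MATCH_THRESHOLD:
        return False  # short overlaps are common and allowed through
    return any(suggestion in source for source in public_corpus)
```

    A real filter would need fuzzier matching (whitespace and identifier normalization, token-level comparison) and an indexed lookup rather than a linear scan, but the length-then-match gating is the behavior GitHub describes — which is also why it triggers so rarely on files with real local context.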

    “There are a lot of opinions out there — there are more than 100 million developers on our platform,” McKinley said. “And there are a lot of opinions between all of the developers, in terms of what they’re concerned about. So we are trying to react to feedback to the community, proactively take measures that we think help make Copilot a great product and experience for developers.”

    What next?

    The EU AI Act progressing is just the beginning — we now know that it’s definitely happening, and in what form. But it will still be at least another couple of years before companies have to comply with it — similar to how companies had to prepare for GDPR in the data privacy realm.

    “I think [technical] standards are going to play a big role in all of this,” McKinley said. “We need to think about how we can get harmonised standards that companies can then comply with. Using GDPR as an example, there are all kinds of different privacy standards that people designed to harmonise that. And we know that as the AI Act goes to implementation, there will be different interests, all trying to figure out how to implement it. So we want to make sure that we’re giving a voice to developers and open source developers in those discussions.”

    On top of that, more regulations are on the horizon. President Biden recently issued an executive order with a view toward setting standards around AI safety and security, which gives a glimpse into how Europe and the U.S. might ultimately differ as it pertains to regulation — even if they do share a similar “risk-based” approach.

    “I would say the EU AI Act is a ‘fundamental rights base,’ as you would expect in Europe,” McKinley said. “And the U.S. side is very cybersecurity, deep-fakes — that kind of lens. But in many ways, they come together to focus on what are risky scenarios — and I think taking a risk-based approach is something that we are in favour of — it’s the right way to think about it.”


    Paul Sawers


  • Nintendo Asks Valve To Kick GameCube And Wii Emulator Off Steam, Says It’s Protecting Its Creativity And Work



    Valve removed the Steam listing for Dolphin, a popular emulator for the GameCube and Wii, after it received a cease and desist from Nintendo, developers behind the project claim. The company behind Mario and Zelda accuses the emulator of illegally circumventing its protections, and says it’s merely protecting the “hard work and creativity of video game engineers and developers.”

    A listing for Dolphin on Valve’s digital storefront first appeared back in March. “We are pleased to announce our great experiment—Dolphin is coming to Steam!” the creators wrote at the time. While the open-source project has been available online for years, interest in retro emulators has increased since the release of the Steam Deck, and an official store page would make the tool even easier to access.

    On May 27, however, Dolphin’s developers announced the Steam port would be “indefinitely postponed” after Valve removed the listing following discussions with Nintendo. “It is with much disappointment that we have to announce that the Dolphin on Steam release has been indefinitely postponed,” the emulator team wrote in an update on the project’s blog. “We were notified by Valve that Nintendo has issued a cease and desist citing the DMCA against Dolphin’s Steam page, and have removed Dolphin from Steam until the matter is settled. We are currently investigating our options and will have a more in-depth response in the near future.”

    According to a copy of the legal notice reviewed by PC Gamer, Nintendo accuses Dolphin of using “cryptographic keys without Nintendo’s authorization and decrypting the ROMs at or immediately before runtime.” While emulation is itself legal, providing users with ways to bypass protections on individual game ROMs could potentially violate Nintendo’s intellectual property rights. It’s an issue that would have to be hashed out in court, though the power imbalance between large corporations and homebrew projects like Dolphin means that rarely actually occurs.

    “Nintendo is committed to protecting the hard work and creativity of video game engineers and developers,” a spokesperson for Nintendo told Kotaku in an email. “This emulator illegally circumvents Nintendo’s protection measures and runs illegal copies of games. Using illegal emulators or illegal copies of games harms development and ultimately stifles innovation. Nintendo respects the intellectual property rights of other companies, and in turn expects others to do the same.”

    While the company has rarely looked the other way when it comes to piracy of its games and the tools that could facilitate it (like mod chips sold online), Nintendo has been particularly aggressive lately in clamping down on leaks and what it believes to be illegal misuses of its games and technology. In February it subpoenaed Discord for the personal information of someone suspected of leaking the official The Legend of Zelda: Tears of the Kingdom art book. In April it issued multiple copyright strikes against dozens of popular Breath of the Wild gameplay videos on YouTube that relied on modded versions of the game. And in May it seemingly had a Switch key-extraction tool, Lockpick, removed from GitHub after illicit copies of Tears of the Kingdom began spreading like wildfire online prior to the game’s official release.

    It’s not yet clear how Dolphin’s current developers will respond, or how willing Valve will be to bring the store page back unless the matter is resolved in court, which could take years. Last year, Valve accidentally included the Switch emulator Yuzu in its YouTube trailer for the Steam Deck. The video was later edited and re-uploaded to remove the reference. The company did not immediately respond to a request for comment.


    Ethan Gach
