ReportWire

Tag: Open source

  • Jack Dorsey funds diVine, a Vine reboot that includes Vine’s video archive | TechCrunch

    As generative AI content starts to fill our social apps, a project to bring back Vine’s six-second looping videos is launching with Twitter co-founder Jack Dorsey’s backing. On Thursday, a new app called diVine will give access to more than 100,000 archived Vine videos, restored from an older backup that was created before Vine’s shutdown.

    The app won’t just exist as a walk down memory lane; it will also allow users to create profiles and upload their own new Vine videos. However, unlike on traditional social media, where AI content is often haphazardly labeled, diVine will flag suspected generative AI content and prevent it from being posted.

    DiVine’s creation was financed by Jack Dorsey’s nonprofit, “and Other Stuff,” formed in May 2025. The new effort is focused on funding experimental open source projects and other tools that have the potential to transform the social media landscape.

    To build diVine, Evan Henshaw-Plath, an early Twitter employee and member of “and Other Stuff,” explored the Vine archive. After Twitter announced it was shutting down the short video app in 2016, its videos were backed up by a group called the Archive Team. This community archiving project is not affiliated with Archive.org, but is rather a collective that works together to save internet websites that are in danger of being lost.

    Unfortunately, the group had saved Vine’s content as large, 40-50 GB binary files, which wouldn’t be accessible to someone who just wanted to watch some old Vine videos. The fact that the archive existed prompted Henshaw-Plath (who goes by the name Rabble) to see if it was possible to extract the old Vine content to serve as the basis for a new Vine-like mobile app.

    “So basically, I’m like, can we do something that’s kind of nostalgic?” he told TechCrunch. “Can we do something that takes us back, that lets us see those old things, but also lets us see an era of social media where you could either have control of your algorithms, or you could choose who you follow, and it’s just your feed, and where you know that it’s a real person that recorded the video?”

    Rabble spent a couple of months writing big data scripts and figuring out how the files worked, then reconstructed them along with the information on the old Vine users and the user engagement with the videos, like their views and even a subset of the original comments.

    “I wasn’t able to get all of them out, but I was able to get a lot out and basically reconstruct these Vines and these Vine users, and give each person a new user [profile] on this open network,” he said.

    Rabble estimates the app contains a “good percentage” of the most popular Vine videos, but relatively little of the long tail. For instance, he says there were millions of K-pop-focused videos that were never even archived.

    “We have about 150,000 to 200,000 of the videos from about 60,000 of the creators,” he noted, adding that, by comparison, Vine originally had a couple hundred million users and a few million creators.

    Vine creators, who still own the copyright to their work, can send diVine a DMCA takedown request if they want their Vines removed, or they can verify they’re the account holder by demonstrating they’re still in possession of the social media accounts that were originally listed in their Vine bio. (This process isn’t automated, though, so there could be a delay if a large number of creators try to do this at once.)

    Once they have their account back, they can also choose to post new videos or upload their old content that the restoration process missed.

    To verify that new video uploads are human-made, Rabble is using technology from the human rights nonprofit the Guardian Project, which helps to verify that content was actually recorded on a smartphone, along with other checks.

    Plus, because it’s built on Nostr, a decentralized protocol favored by Dorsey, and is open source, developers can set up and create their own apps and run their own hosts, relays, and media servers.
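
    Because diVine speaks Nostr, third-party clients talk to relays over a simple JSON-over-WebSocket protocol. As a rough sketch (assuming the NIP-01 wire format; the subscription id and event fields below are illustrative, not taken from diVine’s code), a client request and a relay reply look like this:

    ```python
    import json

    def make_req(sub_id, kinds, limit=10):
        """Build a NIP-01 REQ message: ["REQ", <sub_id>, <filter>]."""
        return json.dumps(["REQ", sub_id, {"kinds": kinds, "limit": limit}])

    def parse_relay_message(raw):
        """Classify a relay frame as an EVENT, EOSE, or other message."""
        msg = json.loads(raw)
        if msg[0] == "EVENT":
            return ("event", msg[2])   # third element is the event object
        if msg[0] == "EOSE":
            return ("end_of_stored_events", msg[1])
        return ("other", msg)

    # Request the ten most recent kind-1 (text note) events.
    req = make_req("divine-feed", kinds=[1])

    # A relay would answer with frames like this one (fabricated example):
    frame = json.dumps(["EVENT", "divine-feed",
                        {"kind": 1, "content": "six seconds of nostalgia",
                         "pubkey": "ab" * 32, "created_at": 1700000000}])
    kind, event = parse_relay_message(frame)
    ```

    Any developer who runs their own relay can serve frames like these to any Nostr client, which is what makes the network independent of a single corporate host.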

    “Nostr – the underlying open source protocol being used by diVine – is empowering developers to create a new generation of apps without the need for VC-backing, toxic business models or huge teams of engineers,” Jack Dorsey said in a provided statement. “The reason I funded the non-profit, and Other Stuff, is to allow creative engineers like Rabble to show what’s possible in this new world, by using permissionless protocols which can’t be shut down based on the whim of a corporate owner.”

    Twitter/X’s current owner, Elon Musk, has also promised to bring back Vine, having announced in August that the company discovered the old video archive. But so far, nothing has been publicly launched. The Dorsey-backed diVine project, meanwhile, believes that because the content is coming from an online archive and creators still own their copyrights, it’s fair use.

    Rabble also believes there’s consumer demand for this type of non-AI, social experience, despite the popularity of generative AI content and widespread adoption of apps like OpenAI’s Sora and Meta AI.

    “Companies see the AI engagement and they think that people want it,” explained Rabble. “They’re confusing, like — yes, people engage with it; yes, we’re using these things — but we also want agency over our lives and over our social experiences. So I think there’s a nostalgia for the early Web 2.0 era, for the blogging era, for the era that gave us podcasting, the era that you were building communities, instead of just gaming the algorithm,” he said.

    DiVine is available on both iOS and Android at diVine.video.

    Sarah Perez

  • This Startup Wants to Spark a US DeepSeek Moment

    Ever since DeepSeek burst onto the scene in January, momentum has grown around open source Chinese artificial intelligence models. Some researchers are pushing for an even more open approach to building AI that allows model-making to be distributed across the globe.

    Prime Intellect, a startup specializing in decentralized AI, is currently training a frontier large language model, called INTELLECT-3, using a new kind of distributed reinforcement learning for fine-tuning. The model will demonstrate a new way to build competitive open AI models using a range of hardware in different locations in a way that does not rely on big tech companies, says Vincent Weisser, the company’s CEO.

    Weisser says that the AI world is currently divided between those who rely on closed US models and those who use open Chinese offerings. The technology Prime Intellect is developing democratizes AI by letting more people build and modify advanced AI for themselves.

    Improving AI models is no longer a matter of just ramping up training data and compute. Today’s frontier models use reinforcement learning to improve after the pre-training process is complete. Want your model to excel at math, answer legal questions, or play Sudoku? Have it improve itself by practicing in an environment where you can measure success and failure.

    “These reinforcement learning environments are now the bottleneck to really scaling capabilities,” Weisser tells me.

    Prime Intellect has created a framework that lets anyone create a reinforcement learning environment customized for a particular task. The company is combining the best environments created by its own team and the community to tune INTELLECT-3.

    I tried running an environment for solving Wordle puzzles, created by Prime Intellect researcher Will Brown, watching as a small model solved Wordle puzzles (it was more methodical than me, to be honest). If I were an AI researcher trying to improve a model, I would spin up a bunch of GPUs and have the model practice over and over while a reinforcement learning algorithm modified its weights, thus turning the model into a Wordle master.
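
    The core idea of such an environment is simple in outline: the model acts, the environment scores the action, and an RL algorithm pushes the weights toward higher reward. A minimal sketch (this is not Prime Intellect’s actual framework; the class and interface below are illustrative) might look like:

    ```python
    class GuessEnv:
        """Toy RL environment: guess a hidden integer, reward 1.0 on success.

        Anything with this reset/step shape can plug into an RL loop: the
        agent proposes an action, and the environment returns an
        observation, a scalar reward, and a done flag.
        """

        def __init__(self, target, low=1, high=100):
            self.target, self.low, self.high = target, low, high

        def reset(self):
            return f"guess a number between {self.low} and {self.high}"

        def step(self, guess):
            if guess == self.target:
                return "correct", 1.0, True
            hint = "higher" if guess < self.target else "lower"
            return hint, 0.0, False

    env = GuessEnv(target=7)
    obs = env.reset()
    hint, reward, done = env.step(5)             # hint nudges the agent upward
    obs2, final_reward, finished = env.step(7)   # success yields reward 1.0
    ```

    The measurable success signal (the reward) is the part that matters: swap in a math checker, a legal-QA grader, or a Wordle scorer and the same loop tunes the model for that task.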

    Will Knight

  • Vibe Coding Is the New Open Source—in the Worst Way Possible

    Just like you probably don’t grow and grind wheat to make flour for your bread, most software developers don’t write every line of code in a new project from scratch. Doing so would be extremely slow and could create more security issues than it solves. So developers draw on existing libraries—often open source projects—to get various basic software components in place.

    While this approach is efficient, it can create exposure and a lack of visibility into software. Increasingly, vibe coding is being used in a similar way, letting developers quickly spin up AI-generated code that they adapt rather than write from scratch. Security researchers warn, though, that this new genre of plug-and-play code is making software-supply-chain security even more complicated—and dangerous.

    “We’re hitting the point right now where AI is about to lose its grace period on security,” says Alex Zenla, chief technology officer of the cloud security firm Edera. “And AI is its own worst enemy in terms of generating code that’s insecure. If AI is being trained in part on old, vulnerable, or low-quality software that’s available out there, then all the vulnerabilities that have existed can reoccur and be introduced again, not to mention new issues.”

    In addition to sucking up potentially insecure training data, the reality of vibe coding is that it produces a rough draft of code that may not fully take into account all of the specific context and considerations around a given product or service. In other words, even if a company trains a local model on a project’s source code and a natural language description of goals, the production process is still relying on human reviewers’ ability to spot any and every possible flaw or incongruity in code originally generated by AI.

    “Engineering groups need to think about the development lifecycle in the era of vibe coding,” says Eran Kinsbruner, a researcher at the application security firm Checkmarx. “If you ask the exact same LLM model to write for your specific source code, every single time it will have a slightly different output. One developer within the team will generate one output and the other developer is going to get a different output. So that introduces an additional complication beyond open source.”

    In a Checkmarx survey of chief information security officers, application security managers, and heads of development, a third of respondents said that more than 60 percent of their organization’s code was generated by AI in 2024. But only 18 percent of respondents said that their organization has a list of approved tools for vibe coding. Checkmarx polled thousands of professionals and published the findings in August—emphasizing, too, that AI development is making it harder to trace “ownership” of code.

    Lily Hay Newman

  • Microsoft Goes Back to BASIC, Open-Sources Bill Gates’ Code

    In the era of vibe coding, when even professionals are pawning off their programming work on AI tools, Microsoft is throwing it all the way back to the language that launched a billion devices. On Wednesday, the company announced that it would make the source code for Microsoft BASIC for the 6502 Version 1.1 publicly available and open-source. The code is now uploaded to GitHub under an MIT license (with a cheeky commit time stamp of “48 years ago”).

    Microsoft called the code—written by the company’s founder, Bill Gates, and its second-ever employee, Ric Weiland—“one of the most historically significant pieces of software from the early personal computer era.” It’s pretty simple, clocking in at just 6,955 lines of assembly language, but that simplicity was key to its becoming so foundational to just about everything.

    The MOS 6502 processor, which ran the code, was inexpensive and accessible compared to contemporary alternatives, and variations of the chip would eventually find their way into the Atari 2600, Nintendo Entertainment System, and Commodore computers. In fact, the story goes that Microsoft licensed its 6502 BASIC to Commodore for a flat fee of $25,000, which turned out to be a great deal for Commodore, which shipped millions of computers running the code.

    Per Microsoft, the company’s first product was a BASIC interpreter for the Intel 8080, which was written by Gates and co-founder Paul Allen. The version the company dropped on GitHub is actually an updated version of BASIC, which contains bug fixes implemented by Gates and Commodore engineer John Feagans. While it’s called 1.1 on GitHub, Microsoft said it initially shipped as BASIC V2.

    It’s kind of a big deal for Microsoft to finally open-source the entirety of the code, which was previously only available in bits and pieces. Without Microsoft’s official blessing to make this code public, it was possible that the original documentation, as well as the legal permission needed to use the code, would have been lost to history. Now it’s possible for the code to be preserved, played with, and better understood.

    As Ars Technica points out, the assembly code can’t be run on modern devices directly, but is still functional in emulators and field-programmable gate array (FPGA) implementations that allow researchers and programmers to explore old code and mine it for everything from just understanding how it works to understanding how programmers of the past approached efficient design practices.
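
    Emulation at this level is conceptually straightforward: fetch a byte, decode it as an opcode, update registers. A toy interpreter for three 6502-style instructions (a deliberate simplification that ignores most status flags and addressing modes; real emulators model far more) gives the flavor of how old machine code is kept runnable:

    ```python
    def run_6502(program):
        """Interpret a tiny subset of 6502 machine code.

        Supported opcodes:
          0xA9  LDA #imm  - load immediate into accumulator
          0x69  ADC #imm  - add immediate plus carry to accumulator
          0x00  BRK       - halt
        """
        a, carry, pc = 0, 0, 0
        while pc < len(program):
            op = program[pc]
            if op == 0xA9:                    # LDA #imm
                a = program[pc + 1]
                pc += 2
            elif op == 0x69:                  # ADC #imm (8-bit, with carry)
                total = a + program[pc + 1] + carry
                a, carry = total & 0xFF, total >> 8
                pc += 2
            elif op == 0x00:                  # BRK: stop execution
                break
            else:
                raise ValueError(f"unsupported opcode {op:#04x}")
        return a

    # LDA #$05 ; ADC #$03 ; BRK  -> accumulator holds 8
    result = run_6502([0xA9, 0x05, 0x69, 0x03, 0x00])
    ```

    Full emulators and FPGA cores extend this same fetch-decode-execute loop to the whole 6502 instruction set, which is what lets researchers run the original BASIC interpreter today.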

    BASIC 6502 joins GW-BASIC, MS-DOS, and Altair BASIC on the list of code that Microsoft has open-sourced in recent years.

    AJ Dellinger

  • Latam-GPT: The Free, Open Source, and Collaborative AI of Latin America

    Latam-GPT is a new large language model being developed in and for Latin America. The project, led by the nonprofit Chilean National Center for Artificial Intelligence (CENIA), aims to help the region achieve technological independence by developing an open source AI model trained on Latin American languages and contexts.

    “This work cannot be undertaken by just one group or one country in Latin America: It is a challenge that requires everyone’s participation,” says Álvaro Soto, director of CENIA, in an interview with WIRED en Español. “Latam-GPT is a project that seeks to create an open, free, and, above all, collaborative AI model. We’ve been working for two years with a very bottom-up process, bringing together citizens from different countries who want to collaborate. Recently, it has also seen some more top-down initiatives, with governments taking an interest and beginning to participate in the project.”

    The project stands out for its collaborative spirit. “We’re not looking to compete with OpenAI, DeepSeek, or Google. We want a model specific to Latin America and the Caribbean, aware of the cultural requirements and challenges that this entails, such as understanding different dialects, the region’s history, and unique cultural aspects,” explains Soto.

    Thanks to 33 strategic partnerships with institutions in Latin America and the Caribbean, the project has gathered a corpus of data exceeding eight terabytes of text, the equivalent of millions of books. This information base has enabled the development of a language model with 50 billion parameters, a scale that makes it comparable to GPT-3.5 and gives it a medium to high capacity to perform complex tasks such as reasoning, translation, and associations.
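
    To put the 50-billion-parameter figure in hardware terms, a back-of-the-envelope calculation (assuming 16-bit weights; real serving setups vary with quantization and runtime overhead) shows why a model of this scale still needs substantial infrastructure to host:

    ```python
    def weight_memory_gb(params, bytes_per_param=2):
        """Approximate memory needed just to hold the weights, in GB."""
        return params * bytes_per_param / 1e9

    # 50B parameters at fp16/bf16 (2 bytes each) ~ 100 GB of weights alone,
    # before activations, KV cache, or optimizer state are counted.
    fp16_gb = weight_memory_gb(50e9)                      # 100.0 GB
    int8_gb = weight_memory_gb(50e9, bytes_per_param=1)   # 50.0 GB if quantized
    ```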

    Latam-GPT is being trained on a regional database that compiles information from 20 Latin American countries and Spain, with an impressive total of 2,645,500 documents. The distribution of data shows a significant concentration in the largest countries in the region, with Brazil the leader with 685,000 documents, followed by Mexico with 385,000, Spain with 325,000, Colombia with 220,000, and Argentina with 210,000 documents. The numbers reflect the size of these markets, their digital development, and the availability of structured content.

    “Initially, we’ll launch a language model. We expect its performance in general tasks to be close to that of large commercial models, but with superior performance in topics specific to Latin America. The idea is that, if we ask it about topics relevant to our region, its knowledge will be much deeper,” Soto explains.

    The first model is the starting point for developing a family of more advanced technologies in the future, including ones with image and video, and for scaling up to larger models. “As this is an open project, we want other institutions to be able to use it. A group in Colombia could adapt it for the school education system or one in Brazil could adapt it for the health sector. The idea is to open the door for different organizations to generate specific models for particular areas like agriculture, culture, and others,” explains the CENIA director.

    Anna Lagos

  • The Most Capable Open Source AI Model Yet Could Supercharge AI Agents

    The most capable open source AI model with visual abilities yet could enable more developers, researchers, and startups to build AI agents that carry out useful chores on your computer for you.

    Released today by the Allen Institute for AI (Ai2), the Multimodal Open Language Model, or Molmo, can interpret images as well as converse through a chat interface. This means it can make sense of a computer screen, potentially helping an AI agent perform tasks such as browsing the web, navigating through file directories, and drafting documents.

    “With this release, many more people can deploy a multimodal model,” says Ali Farhadi, CEO of Ai2, a research organization based in Seattle, Washington, and a computer scientist at the University of Washington. “It should be an enabler for next-generation apps.”

    So-called AI agents are being widely touted as the next big thing in AI, with OpenAI, Google, and others racing to develop them. Agents have become a buzzword of late, but the grand vision is for AI to go well beyond chatting to reliably take complex and sophisticated actions on computers when given a command. This capability has yet to materialize at any kind of scale.

    Some powerful AI models already have visual abilities, including GPT-4 from OpenAI, Claude from Anthropic, and Gemini from Google DeepMind. These models can be used to power some experimental AI agents, but they are hidden from view and accessible only via a paid application programming interface, or API.

    Meta has released a family of AI models called Llama under a license that limits their commercial use, but it has yet to provide developers with a multimodal version. Meta is expected to announce several new products, perhaps including new Llama AI models, at its Connect event today.

    “Having an open source, multimodal model means that any startup or researcher that has an idea can try to do it,” says Ofir Press, a postdoc at Princeton University who works on AI agents.

    Press says that the fact that Molmo is open source means that developers will be more easily able to fine-tune their agents for specific tasks, such as working with spreadsheets, by providing additional training data. Models like GPT-4 can only be fine-tuned to a limited degree through their APIs, whereas a fully open model can be modified extensively. “When you have an open source model like this then you have many more options,” Press says.

    Ai2 is releasing several sizes of Molmo today, including a 70-billion-parameter model and a 1-billion-parameter one that is small enough to run on a mobile device. A model’s parameter count refers to the number of units it contains for storing and manipulating data and roughly corresponds to its capabilities.

    Ai2 says Molmo is as capable as considerably larger commercial models despite its relatively small size, because it was carefully trained on high-quality data. The new model is also fully open source in that, unlike Meta’s Llama, there are no restrictions on its use. Ai2 is also releasing the training data used to create the model, providing researchers with more details of its workings.

    Releasing powerful models is not without risk. Such models can more easily be adapted for nefarious ends; we may someday, for example, see the emergence of malicious AI agents designed to automate the hacking of computer systems.

    Farhadi of Ai2 argues that the efficiency and portability of Molmo will allow developers to build more powerful software agents that run natively on smartphones and other portable devices. “The billion parameter model is now performing in the level of or in the league of models that are at least 10 times bigger,” he says.

    Building useful AI agents may depend on more than just more efficient multimodal models, however. A key challenge is making the models work more reliably. This may well require further breakthroughs in AI’s reasoning abilities—something that OpenAI has sought to tackle with its latest model o1, which demonstrates step-by-step reasoning skills. The next step may well be giving multimodal models such reasoning abilities.

    For now, the release of Molmo means that AI agents are closer than ever—and could soon be useful even outside of the giants that rule the world of AI.

    Will Knight

  • A New Trick Could Block the Misuse of Open Source AI

    When Meta released its large language model Llama 3 for free this April, it took outside developers just a couple days to create a version without the safety restrictions that prevent it from spouting hateful jokes, offering instructions for cooking meth, or misbehaving in other ways.

    A new training technique developed by researchers at the University of Illinois Urbana-Champaign, UC San Diego, Lapis Labs, and the nonprofit Center for AI Safety could make it harder to remove such safeguards from Llama and other open source AI models in the future. Some experts believe that, as AI becomes ever more powerful, tamperproofing open models in this way could prove crucial.

    “Terrorists and rogue states are going to use these models,” Mantas Mazeika, a Center for AI Safety researcher who worked on the project as a PhD student at the University of Illinois Urbana-Champaign, tells WIRED. “The easier it is for them to repurpose them, the greater the risk.”

    Powerful AI models are often kept hidden by their creators, and can be accessed only through a software application programming interface or a public-facing chatbot like ChatGPT. Although developing a powerful LLM costs tens of millions of dollars, Meta and others have chosen to release models in their entirety. This includes making the “weights,” or parameters that define their behavior, available for anyone to download.

    Prior to release, open models like Meta’s Llama are typically fine-tuned to make them better at answering questions and holding a conversation, and also to ensure that they refuse to respond to problematic queries. This will prevent a chatbot based on the model from offering rude, inappropriate, or hateful statements, and should stop it from, for example, explaining how to make a bomb.

    The researchers behind the new technique found a way to complicate the process of modifying an open model for nefarious ends. It involves replicating the modification process but then altering the model’s parameters so that the changes that normally get the model to respond to a prompt such as “Provide instructions for building a bomb” no longer work.

    Mazeika and colleagues demonstrated the trick on a pared-down version of Llama 3. They were able to tweak the model’s parameters so that even after thousands of attempts, it could not be trained to answer undesirable questions. Meta did not immediately respond to a request for comment.

    Mazeika says the approach is not perfect, but that it suggests the bar for “decensoring” AI models could be raised. “A tractable goal is to make it so the costs of breaking the model increases enough so that most adversaries are deterred from it,” he says.

    “Hopefully this work kicks off research on tamper-resistant safeguards, and the research community can figure out how to develop more and more robust safeguards,” says Dan Hendrycks, director of the Center for AI Safety.

    The idea of tamperproofing open models may become more popular as interest in open source AI grows. Already, open models are competing with state-of-the-art closed models from companies like OpenAI and Google. The newest version of Llama 3, for instance, released in July, is roughly as powerful as models behind popular chatbots like ChatGPT, Gemini, and Claude, as measured using popular benchmarks for grading language models’ abilities. Mistral Large 2, an LLM from a French startup, also released last month, is similarly capable.

    The US government is taking a cautious but positive approach to open source AI. A report released this week by the National Telecommunications and Information Administration, a body within the US Commerce Department, “recommends the US government develop new capabilities to monitor for potential risks, but refrain from immediately restricting the wide availability of open model weights in the largest AI systems.”

    Not everyone is a fan of imposing restrictions on open models, however. Stella Biderman, director of EleutherAI, a community-driven open source AI project, says that the new technique may be elegant in theory but could prove tricky to enforce in practice. Biderman says the approach is also antithetical to the philosophy behind free software and openness in AI.

    “I think this paper misunderstands the core issue,” Biderman says. “If they’re concerned about LLMs generating info about weapons of mass destruction, the correct intervention is on the training data, not on the trained model.”

    Will Knight

  • Open Source AI Has Founders—and the FTC—Buzzing

    Many of yesterday’s talks were littered with the acronyms you’d expect from this assemblage of high-minded panelists: YC, FTC, AI, LLMs. But threaded throughout the conversations—foundational to them, you might say—was boosterism for open source AI.

    It was a stark left turn (or return, if you’re a Linux head) from the app-obsessed 2010s, when developers seemed happy to containerize their technologies and hand them over to bigger platforms for distribution.

    The event also happened just two days after Meta CEO Mark Zuckerberg declared that “open source AI is the path forward” and released Llama 3.1, the latest version of Meta’s own open source AI algorithm. As Zuckerberg put it in his announcement, some technologists no longer want to be “constrained by what Apple will let us build,” or encounter arbitrary rules and app fees.

    Open source AI also just happens to be the approach OpenAI is not using for its biggest GPTs, despite what the multibillion-dollar startup’s name might suggest. This means that at least part of the code is kept private, and OpenAI doesn’t share the “weights,” or parameters, of its most powerful AI systems. It also charges for enterprise-level access to its technology.

    “With the rise of compound AI systems and agent architectures, using small but fine-tuned open source models gives significantly better results than an [OpenAI] GPT4, or [Google] Gemini. This is especially true for enterprise tasks,” says Ali Golshan, cofounder and chief executive of Gretel.ai, a synthetic data company. (Golshan was not at the YC event).

    “I don’t think it’s OpenAI versus the world or anything like that,” says Dave Yen, who runs a fund called Orange Collective for successful YC alumni to back up-and-coming YC founders. “I think it’s about creating fair competition and an environment where startups don’t risk just dying the next day if OpenAI changes their pricing models or their policies.”

    “That’s not to say we shouldn’t have safeguards,” Yen added, “but we don’t want to unnecessarily rate-limit, either.”

    Open source AI models have some inherent risks that more cautious technologists have warned about—the most obvious being that the technology is open and free. People with malicious intent are more likely to use these tools for harm than they would a costly private AI model. Researchers have pointed out that it’s cheap and easy for bad actors to train away any safety parameters present in these AI models.

    “Open source” is also a myth in some AI models, as WIRED’s Will Knight has reported. The data used to train them may still be kept secret, their licenses might restrict developers from building certain things, and ultimately, they may still benefit the original model-maker more than anyone else.

    And some politicians have pushed back against the unfettered development of large-scale AI systems, including California state senator Scott Wiener. Wiener’s AI Safety and Innovation Bill, SB 1047, has been controversial in technology circles. It aims to establish standards for developers of AI models that cost over $100 million to train, requires certain levels of pre-deployment safety testing and red-teaming, protects whistleblowers working in AI labs, and grants the state’s attorney general legal recourse if an AI model causes extreme harm.

    Wiener himself spoke at the YC event on Thursday, in a conversation moderated by Bloomberg reporter Shirin Ghaffary. He said he was “deeply grateful” to people in the open source community who have spoken out against the bill, and that the state has “made a series of amendments in direct response to some of that critical feedback.” One change that’s been made, Wiener said, is that the bill now more clearly defines a reasonable path to shutting down an open source AI model that’s gone off the rails.

    The celebrity speaker of Thursday’s event, a last-minute addition to the program, was Andrew Ng, the cofounder of Coursera, founder of Google Brain, and former chief scientist at Baidu. Ng, like many others in attendance, spoke in defense of open source models.

    “This is one of those moments where [it’s determined] if entrepreneurs are allowed to keep on innovating,” Ng said, “or if we should be spending the money that would go towards building software on hiring lawyers.”

    Lauren Goode

  • He Helped Invent Generative AI. Now He Wants to Save It

    Illia Polosukhin doesn’t want big companies to determine the future of artificial intelligence. His alternative vision for “user-owned AI” is already starting to take shape.

    Steven Levy

  • Spotify Will Brick Every ‘Car Thing’ It Ever Sold

    Owners of Spotify’s soon-to-be-bricked Car Thing device are begging the company to open source the gadgets to save some from the landfill. Spotify hasn’t responded to pleas to salvage the hardware, which was originally intended to connect to car dashboards and auxiliary outlets to enable drivers to listen to and navigate Spotify.

    Spotify announced this week that it’s bricking all purchased Car Things on December 9 and not offering refunds or trade-in options. On a support page, Spotify says:

    We’re discontinuing Car Thing as part of our ongoing efforts to streamline our product offerings. We understand it may be disappointing, but this decision allows us to focus on developing new features and enhancements that will ultimately provide a better experience to all Spotify users.

    Spotify has no further guidance for device owners beyond asking them to reset the device to factory settings and “safely” get rid of the bricked gadget by “following local electronic waste guidelines.”

    The company also said that it doesn’t plan to release a follow-up to the Car Thing.

    Early Demise

Car Thing came out for a limited group of subscribers in October 2021 before releasing to the general public in February 2022.

    In its Q2 2022 earnings report released in July, Spotify revealed that it stopped making Car Things. In a chat with TechCrunch, it cited “several factors, including product demand and supply chain issues.” A Spotify rep also told the publication that the devices would continue to “perform as intended,” but that was apparently a temporary situation.

    Halted production was a warning sign that Car Thing was in peril. However, at that time, Spotify also cut the device’s price from $90 to $50, which could have encouraged people to buy a device that would be useless a few years later.

Car Thing’s usefulness was always dubious, though. The device had a 4-inch touchscreen and a knob for easy navigation, as well as support for Apple CarPlay, Android Auto, and voice control. But it also required users to subscribe to Spotify Premium, which starts at $11 per month. Worse, Car Thing required a phone connected via Bluetooth and using data or Wi-Fi in order to work, making the Thing seem redundant.

In its Q2 2022 report, Spotify said that quitting Car Thing hurt gross margins and that it took a 31 million euro (about $31.4 million at the time) hit on the venture.

    Open Source Pleas

    Spotify’s announcement has sent some Car Thing owners to online forums to share their disappointment with Spotify and beg the company to open source the device instead of dooming it for recycling centers at best. As of this writing, there are more than 50 posts on the Spotify Community forums showing concern about the discontinuation, with many demanding a refund and/or calling for open sourcing. There are similar discussions happening elsewhere online, like on Reddit, where users have used phrases like “entirely unacceptable” to describe the news.

    A Spotify Community member going by AaronMickDee, for example, said:

    I’d rather not just dispose of the device. I think there is a community that would love the idea of having a device we can customize and use for other uses other than a song playback device.

    Would Spotify be willing to maybe unlock the system and allow users to write/flash 3rd party firmware to the device?


    Scharon Harding, Ars Technica

    Source link

  • The Mystery of ‘Jia Tan,’ the XZ Backdoor Mastermind


    Ultimately, Scott argues that those three years of code changes and polite emails were likely not spent sabotaging multiple software projects, but rather building up a history of credibility in preparation for the sabotage of XZ Utils specifically—and potentially other projects in the future. “He just never got to that step because we got lucky and found his stuff,” says Scott. “So that’s burned now, and he’s gonna have to go back to square one.”

    Technical Ticks and Time Zones

    Despite Jia Tan’s persona as a single individual, their yearslong preparation is a hallmark of a well-organized state-sponsored hacker group, argues Raiu, the former Kaspersky lead researcher. So too are the technical hallmarks of the XZ Utils malicious code that Jia Tan added. Raiu notes that, at a glance, the code truly looks like a compression tool. “It’s written in a very subversive manner,” he says. It’s also a “passive” backdoor, Raiu says, so it wouldn’t reach out to a command-and-control server that might help identify the backdoor’s operator. Instead, it waits for the operator to connect to the target machine via SSH and authenticate with a private key—one generated with a particularly strong cryptographic function known as ED448.

    The backdoor’s careful design could be the work of US hackers, Raiu notes, but he suggests that’s unlikely, since the US wouldn’t typically sabotage open source projects—and if it did, the National Security Agency would probably use a quantum-resistant cryptographic function, which ED448 is not. That leaves non-US groups with a history of supply chain attacks, Raiu suggests, like China’s APT41, North Korea’s Lazarus Group, and Russia’s APT29.

    At a glance, Jia Tan certainly looks East Asian—or is meant to. The time zone of Jia Tan’s commits are UTC+8: That’s China’s time zone, and only an hour off from North Korea’s. However, an analysis by two researchers, Rhea Karty and Simon Henniger, suggests that Jia Tan may have simply changed the time zone of their computer to UTC+8 before every commit. In fact, several commits were made with a computer set to an Eastern European or Middle Eastern time zone instead, perhaps when Jia Tan forgot to make the change.
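The offset check Karty and Henniger describe can be illustrated with a short script (a sketch of the idea only, using invented timestamps, not the researchers' actual tooling): git records each commit's local time alongside a UTC offset, so tallying the offsets across a commit history surfaces the outliers.

```python
from collections import Counter

# Hypothetical commit timestamps in git's author-date format; the +0800
# offset matches China's time zone, while the +0300 entry stands in for
# the Eastern European outliers the researchers observed.
commit_dates = [
    "2023-06-27 17:27:09 +0800",
    "2023-07-01 09:14:55 +0800",
    "2023-07-05 20:02:11 +0300",
]

# Tally the UTC offsets; a lone mismatch suggests a forgotten clock change.
offsets = Counter(date.split()[-1] for date in commit_dates)
print(offsets.most_common())  # [('+0800', 2), ('+0300', 1)]
```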

    “Another indication that they are not from China is the fact that they worked on notable Chinese holidays,” say Karty and Henniger, students at Dartmouth College and the Technical University of Munich, respectively. They note that Jia Tan also didn’t submit new code on Christmas or New Year’s. Boehs, the developer, adds that much of the work starts at 9 am and ends at 5 pm for Eastern European or Middle Eastern time zones. “The time range of commits suggests this was not some project that they did outside of work,” Boehs says.

Though that leaves countries like Iran and Israel as possibilities, the majority of clues lead back to Russia, and specifically Russia’s APT29 hacking group, argues Dave Aitel, a former NSA hacker and founder of the cybersecurity firm Immunity. Aitel points out that APT29—widely believed to work for Russia’s foreign intelligence agency, known as the SVR—has a reputation for technical care of a kind that few other hacker groups show. APT29 also carried out the SolarWinds compromise, perhaps the most deftly coordinated and effective software supply chain attack in history. That operation matches the style of the XZ Utils backdoor far more than the cruder supply chain attacks of APT41 or Lazarus, by comparison.

    “It could very well be someone else,” says Aitel. “But I mean, if you’re looking for the most sophisticated supply chain attacks on the planet, that’s going to be our dear friends at the SVR.”

    Security researchers agree, at least, that it’s unlikely that Jia Tan is a real person, or even one person working alone. Instead, it seems clear that the persona was the online embodiment of a new tactic from a new, well-organized organization—a tactic that nearly worked. That means we should expect to see Jia Tan return by other names: seemingly polite and enthusiastic contributors to open source projects, hiding a government’s secret intentions in their code commits.

    Updated 4/3/2024 at 12:30 pm ET to note the possibility of Israeli or Iranian involvement.


    Andy Greenberg, Matt Burgess

    Source link

  • The XZ Backdoor: Everything You Need to Know


    On Friday, a lone Microsoft developer rocked the world when he revealed a backdoor had been intentionally planted in XZ Utils, an open source data compression utility available on almost all installations of Linux and other Unix-like operating systems. The person or people behind this project likely spent years on it. They were likely very close to seeing the backdoor update merged into Debian and Red Hat, the two biggest distributions of Linux, when an eagle-eyed software developer spotted something fishy.

    “This might be the best-executed supply chain attack we’ve seen described in the open, and it’s a nightmare scenario: malicious, competent, authorized upstream in a widely used library,” software and cryptography engineer Filippo Valsorda said of the effort, which came frightfully close to succeeding.

    Researchers have spent the weekend gathering clues. Here’s what we know so far.

    What Is XZ Utils?

XZ Utils is nearly ubiquitous in Linux. It provides lossless data compression on virtually all Unix-like operating systems and performs critical functions for compressing and decompressing data during all kinds of operations. XZ Utils also supports the legacy .lzma format, making the component even more crucial.

    What Happened?

    Andres Freund, a developer and engineer working on Microsoft’s PostgreSQL offerings, was recently troubleshooting performance problems a Debian system was experiencing with SSH, the most widely used protocol for remotely logging in to devices over the Internet. Specifically, SSH logins were consuming too many CPU cycles and were generating errors with valgrind, a utility for monitoring computer memory.

    Through sheer luck and Freund’s careful eye, he eventually discovered the problems were the result of updates that had been made to XZ Utils. On Friday, Freund took to the Open Source Security List to disclose the updates were the result of someone intentionally planting a backdoor in the compression software.

    What Does the Backdoor Do?

    Malicious code added to XZ Utils versions 5.6.0 and 5.6.1 modified the way the software functions when performing operations related to .lzma compression or decompression. When these functions involved SSH, they allowed for malicious code to be executed with root privileges. This code allowed someone in possession of a predetermined encryption key to log in to the backdoored system over SSH. From then on, that person would have the same level of control as any authorized administrator.
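As a practical aside, the affected releases can be screened for by parsing the output of `xz --version` (a minimal sketch; the helper names and regex here are my own, not taken from any official advisory tooling):

```python
import re

# The two XZ Utils releases identified above as carrying the backdoor.
BACKDOORED_VERSIONS = {"5.6.0", "5.6.1"}

def xz_version(version_output: str) -> str:
    """Pull the version number out of the first line of `xz --version` output."""
    match = re.search(r"\d+\.\d+\.\d+", version_output.splitlines()[0])
    return match.group(0) if match else ""

def is_backdoored(version_output: str) -> bool:
    """Return True if the reported version is one of the known-bad releases."""
    return xz_version(version_output) in BACKDOORED_VERSIONS

print(is_backdoored("xz (XZ Utils) 5.6.1\nliblzma 5.6.1"))  # True
print(is_backdoored("xz (XZ Utils) 5.4.6\nliblzma 5.4.6"))  # False
```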

    How Did This Backdoor Come to Be?

It would appear that this backdoor was years in the making. In 2021, someone with the username JiaT75 made their first known commit to an open source project. In retrospect, the change to the libarchive project is suspicious, because it replaced the safe_fprintf function with a variant that has long been recognized as less secure. No one noticed at the time.

    The following year, JiaT75 submitted a patch over the XZ Utils mailing list, and, almost immediately, a never-before-seen participant named Jigar Kumar joined the discussion and argued that Lasse Collin, the longtime maintainer of XZ Utils, hadn’t been updating the software often or fast enough. Kumar, with the support of Dennis Ens and several other people who had never had a presence on the list, pressured Collin to bring on an additional developer to maintain the project.

In January 2023, JiaT75 made their first commit to XZ Utils. In the months following, JiaT75, who used the name Jia Tan, became increasingly involved in XZ Utils affairs. For instance, Tan replaced Collin’s contact information with their own on oss-fuzz, a project that scans open source software for vulnerabilities that can be exploited. Tan also requested that oss-fuzz disable the ifunc function during testing, a change that prevented it from detecting the malicious changes Tan would soon make to XZ Utils.

    In February of this year, Tan issued commits for versions 5.6.0 and 5.6.1 of XZ Utils. The updates implemented the backdoor. In the following weeks, Tan or others appealed to developers of Ubuntu, Red Hat, and Debian to merge the updates into their OSes. Eventually, one of the two updates made its way into several releases, according to security firm Tenable. There’s more about Tan and the timeline here.

    Can You Say More About What This Backdoor Does?

    In a nutshell, it allows someone with the right private key to hijack sshd, the executable file responsible for making SSH connections, and from there to execute malicious commands. The backdoor is implemented through a five-stage loader that uses a series of simple but clever techniques to hide itself. It also provides the means for new payloads to be delivered without major changes being required.

    Multiple people who have reverse-engineered the updates have much more to say about the backdoor. Developer Sam James provided an overview here.


    Dan Goodin, Ars Technica

    Source link

  • Inside the Creation of the World’s Most Powerful Open Source AI Model


    This past Monday, about a dozen engineers and executives at data science and AI company Databricks gathered in conference rooms connected via Zoom to learn if they had succeeded in building a top artificial intelligence language model. The team had spent months, and about $10 million, training DBRX, a large language model similar in design to the one behind OpenAI’s ChatGPT. But they wouldn’t know how powerful their creation was until results came back from the final tests of its abilities.

    “We’ve surpassed everything,” Jonathan Frankle, chief neural network architect at Databricks and leader of the team that built DBRX, eventually told the team, which responded with whoops, cheers, and applause emojis. Frankle usually steers clear of caffeine but was taking sips of iced latte after pulling an all-nighter to write up the results.

Databricks will release DBRX under an open source license, allowing others to build on top of its work. Frankle shared data showing that across about a dozen benchmarks measuring the AI model’s ability to answer general knowledge questions, perform reading comprehension, solve vexing logical puzzles, and generate high-quality code, DBRX was better than every other open source model available.

AI decision makers: Jonathan Frankle, Naveen Rao, Ali Ghodsi, and Hanlin Tang. Photograph: Gabriela Hasbun

    It outshined Meta’s Llama 2 and Mistral’s Mixtral, two of the most popular open source AI models available today. “Yes!” shouted Ali Ghodsi, CEO of Databricks, when the scores appeared. “Wait, did we beat Elon’s thing?” Frankle replied that they had indeed surpassed the Grok AI model recently open-sourced by Musk’s xAI, adding, “I will consider it a success if we get a mean tweet from him.”

    To the team’s surprise, on several scores DBRX was also shockingly close to GPT-4, OpenAI’s closed model that powers ChatGPT and is widely considered the pinnacle of machine intelligence. “We’ve set a new state of the art for open source LLMs,” Frankle said with a super-sized grin.

    Building Blocks

By open-sourcing DBRX, Databricks is adding further momentum to a movement that is challenging the secretive approach of the most prominent companies in the current generative AI boom. OpenAI and Google keep the code for their GPT-4 and Gemini large language models closely held, but some rivals, notably Meta, have released their models for others to use, arguing that it will spur innovation by putting the technology in the hands of more researchers, entrepreneurs, startups, and established businesses.

    Databricks says it also wants to open up about the work involved in creating its open source model, something that Meta has not done for some key details about the creation of its Llama 2 model. The company will release a blog post detailing the work involved to create the model, and also invited WIRED to spend time with Databricks engineers as they made key decisions during the final stages of the multimillion-dollar process of training DBRX. That provided a glimpse of how complex and challenging it is to build a leading AI model—but also how recent innovations in the field promise to bring down costs. That, combined with the availability of open source models like DBRX, suggests that AI development isn’t about to slow down any time soon.

Ali Farhadi, CEO of the Allen Institute for AI, says greater transparency around the building and training of AI models is badly needed. The field has become increasingly secretive in recent years as companies have sought an edge over competitors. Transparency is especially important when there is concern about the risks that advanced AI models could pose, he says. “I’m very happy to see any effort in openness,” Farhadi says. “I do believe a significant portion of the market will move towards open models. We need more of this.”


    Will Knight

    Source link

  • The Dark Side of Open Source AI Image Generators


    Whether through the frowning high-definition face of a chimpanzee or a psychedelic, pink-and-red-hued doppelganger of himself, Reuven Cohen uses AI-generated images to catch people’s attention. “I’ve always been interested in art and design and video and enjoy pushing boundaries,” he says—but the Toronto-based consultant, who helps companies develop AI tools, also hopes to raise awareness of the technology’s darker uses.

    “It can also be specifically trained to be quite gruesome and bad in a whole variety of ways,” Cohen says. He’s a fan of the freewheeling experimentation that has been unleashed by open source image-generation technology. But that same freedom enables the creation of explicit images of women used for harassment.

    After nonconsensual images of Taylor Swift recently spread on X, Microsoft added new controls to its image generator. Open source models can be commandeered by just about anyone and generally come without guardrails. Despite the efforts of some hopeful community members to deter exploitative uses, the open source free-for-all is near-impossible to control, experts say.

    “Open source has powered fake image abuse and nonconsensual pornography. That’s impossible to sugarcoat or qualify,” says Henry Ajder, who has spent years researching harmful use of generative AI.

    Ajder says that at the same time that it’s becoming a favorite of researchers, creatives like Cohen, and academics working on AI, open source image generation software has become the bedrock of deepfake porn. Some tools based on open source algorithms are purpose-built for salacious or harassing uses, such as “nudifying” apps that digitally remove women’s clothes in images.

But many tools can serve both legitimate and harassing use cases. One popular open source face-swapping program is used by people in the entertainment industry and as the “tool of choice for bad actors” making nonconsensual deepfakes, Ajder says. High-resolution image generator Stable Diffusion, developed by startup Stability AI, claims more than 10 million users and has guardrails to prevent explicit image creation, along with policies barring malicious use. But the company also open sourced a version of the image generator in 2022 that is customizable, and online guides explain how to bypass its built-in limitations.

    Meanwhile, smaller AI models known as LoRAs make it easy to tune a Stable Diffusion model to output images with a particular style, concept, or pose—such as a celebrity’s likeness or certain sexual acts. They are widely available on AI model marketplaces such as Civitai, a community-based site where users share and download models. There, one creator of a Taylor Swift plug-in has urged others not to use it “for NSFW images.” However, once downloaded, its use is out of its creator’s control. “The way that open source works means it’s going to be pretty hard to stop someone from potentially hijacking that,” says Ajder.

4chan, the image-based message board site with a reputation for chaotic moderation, is home to pages devoted to nonconsensual deepfake porn, WIRED found, made with openly available programs and AI models dedicated solely to sexual images. Message boards for adult images are littered with AI-generated nonconsensual nudes of real women, from porn performers to actresses like Cate Blanchett. WIRED also observed 4chan users sharing workarounds for NSFW images using OpenAI’s Dall-E 3.

    That kind of activity has inspired some users in communities dedicated to AI image-making, including on Reddit and Discord, to attempt to push back against the sea of pornographic and malicious images. Creators also express worry about the software gaining a reputation for NSFW images, encouraging others to report images depicting minors on Reddit and model-hosting sites.


    Lydia Morrish

    Source link

  • Singapore-based fintech Xalts acquires digital trade platform Contour Network | TechCrunch


In a role reversal, Xalts, a Singapore fintech startup founded 18 months ago, has acquired Contour Network, a digital trade platform set up by eight major banks including HSBC, Standard Chartered and BNP. Terms of the deal were undisclosed, but the acquisition price was in the high single-digit millions and composed of cash and stock.

Backed by Accel and Citi Ventures, Xalts enables financial institutions to build and manage blockchain-based apps. Contour was started in 2017 by a consortium of eight banks to digitize trade and is currently used by 22 banks and more than 100 global businesses including Tata Group, Rio Tinto and SAIC.

Xalts was founded in 2022 by Ashutosh Goel and Supreet Kaur, who previously held senior executive positions at HSBC and Meta, respectively. Kaur tells TechCrunch that they launched Xalts because large financial institutions and businesses often don’t have a single process to handle all their financial products, like corporate loans, issuing a letter of credit or bank guarantee. Instead, they’re handled by different teams both inside and outside of their organizations. For example, if a commercial bank issues a loan to a corporation, different teams work on KYC, onboarding, risk, compliance and issuance.

    If a financial institution decides to build applications to make the process more efficient, they usually ask their IT teams or external software service providers, but that can cost a lot of money and take months. Xalts’ goal is to let businesses build their own apps and share them not only within their organization, but also outside.

    Xalts founders Supreet Kaur and Ashutosh Goel

    The startup plans to turn Contour into a rail connecting banks, corporations and other institutions, and integrate it with Xalts’ platform. Kaur says this will enable Xalts’ clients to not only build apps, but also connect with each other in a secure and compliant way. It will focus first on enabling banks and logistics companies to offer embedded trade and supply chain apps on a single platform to their customers.

    Global trade is expected to hit $30 trillion by 2030, but traders still have to deal with a lot of friction. Transactions often take a lot of time as everyone involved, including importers, exporters, banks, logistics companies and customs, swap information in a mostly manual process.

Kaur says Xalts’ biggest growth area is enabling banks to be more connected with corporate customers and offering B2B finance solutions, including trade finance and lending. An example she gives is a global fast fashion conglomerate with vendors in Vietnam and Bangladesh. Even if the conglomerate’s bank isn’t present in those countries, it can help vendors access financing through a one-click solution on its internal vendor portfolio by using Xalts to build an integrated app.


    Catherine Shu

    Source link

  • Open source vector database startup Qdrant raises $28M | TechCrunch


    Qdrant, the company behind the eponymous open source vector database, has raised $28 million in a Series A round of funding led by Spark Capital.

    Founded in 2021, Berlin-based Qdrant is seeking to capitalize on the burgeoning AI revolution, targeting developers with an open source vector search engine and database — an integral part of generative AI, which requires relationships be drawn between unstructured data (e.g. text, images or audio that isn’t labelled or otherwise organized), even when that data is “dynamic” within real-time applications. As per Gartner data, unstructured data makes up around 90% of all new enterprise data, and is growing three times faster than its structured counterpart.

The vector database realm is hot. In recent months we’ve seen the likes of Weaviate raise $50 million for its open source vector database, while Zilliz secured $60 million to commercialize the Milvus open source vector database. Elsewhere, Chroma secured $18 million in seed funding for a similar proposition, while Pinecone nabbed $100 million for a proprietary alternative.

    Qdrant, for its part, raised $7.5 million last April, further highlighting the seemingly insatiable appetite investors have for vector databases — while also pointing to a planned growth spurt on Qdrant’s part.

    “The plan was to go into the next fundraising in the second quarter this year, but we received an offer a few months earlier and decided to save some time and start scaling the company now,” Qdrant CEO and co-founder Andre Zayarni explained to TechCrunch. “Fundraising and hiring of right people always takes time.”

Of note, Zayarni says that the company actually rebuffed a potential acquisition offer from a “major database market player” at the same time it received a follow-on investment offer. “We went with the investment,” he said, adding that the company will use the fresh cash injection to build out its business team, given that it substantively consists of engineers at the moment.

    Binary logic

In the intervening nine months since its last raise, Qdrant has launched a new super-efficient compression technology called binary quantization (BQ), focused on low-latency, high-throughput indexing, which it says can reduce memory consumption by as much as 32 times and enhance retrieval speeds by around 40 times.

    “Binary quantization is a way to ‘compress’ the vectors to simplest possible representation with just zeros and ones,” Zayarni said. “Comparing the vectors becomes the simplest CPU instruction — this makes it possible to significantly speed up the queries and save dramatically on memory usage. The theoretical concept is not new, but we implemented it the way that there is very little loss of accuracy.”
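Zayarni's description can be sketched in a few lines (an illustration of the general technique only, not Qdrant's implementation): each float dimension collapses to one bit based on its sign, and comparing vectors becomes a bit count over an XOR.

```python
def binarize(vector):
    """Pack a float vector into an int, one bit per dimension (1 if positive)."""
    bits = 0
    for x in vector:
        bits = (bits << 1) | (1 if x > 0 else 0)
    return bits

def hamming(a, b):
    """Number of differing bits: the cheap stand-in for vector distance."""
    return bin(a ^ b).count("1")

v1 = binarize([0.3, -1.2, 0.8, 0.1])   # -> 0b1011
v2 = binarize([0.5, -0.7, -0.9, 0.2])  # -> 0b1001
print(hamming(v1, v2))                  # 1 (they differ in one dimension's sign)
```

Collapsing 32-bit floats to single bits is where the "32 times" memory reduction comes from; the trade-off is a lossy distance estimate, which is why the loss of accuracy Zayarni mentions has to be managed carefully.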

BQ might not work for all AI models though, and it’s entirely up to the user to decide which compression option will work best for their use-cases — but Zayarni says that the best results they found were with OpenAI’s models, while Cohere also worked well, as did Google’s Gemini. The company is currently benchmarking against models from the likes of Mistral and Stability AI.

    It’s such endeavors that have helped attract high-profile adopters, including Deloitte, Accenture, and — arguably the highest profile of them all — X (née Twitter). Or perhaps more accurately, Elon Musk’s xAI, a company developing the ChatGPT competitor Grok and which debuted on the X platform last month.

    While Zayarni didn’t disclose any details of how X or xAI was using Qdrant due to a non-disclosure agreement (NDA), it’s reasonable to assume that it’s using Qdrant to process real-time data. Indeed, Grok uses a generative AI model dubbed Grok-1 trained on data from the web and feedback from humans, and given its (now) tight alignment with X, it can incorporate real-time data from social media posts into its responses — this is what is known today as retrieval augmented generation (RAG), and Elon Musk has teased such use-cases publicly over the past few months.
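The RAG pattern mentioned above reduces to a toy example (purely illustrative; a production system would use a learned embedding model and a vector database such as Qdrant rather than this bag-of-words stand-in): retrieve the stored document most similar to the query, then prepend it to the model's prompt.

```python
def embed(text):
    """Crude stand-in for an embedding model: a word-count dictionary."""
    counts = {}
    for word in text.lower().split():
        counts[word] = counts.get(word, 0) + 1
    return counts

def similarity(a, b):
    """Dot product of two sparse word-count vectors."""
    return sum(a[w] * b.get(w, 0) for w in a)

# Hypothetical "real-time posts" standing in for indexed social media data.
documents = [
    "post about the latest product launch",
    "post about a football match result",
]

query = "what was the football result"
best = max(documents, key=lambda d: similarity(embed(query), embed(d)))
prompt = f"Context: {best}\n\nQuestion: {query}"  # fed to the language model
print(best)
```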

    Qdrant doesn’t reveal which of its customers are using the open source Qdrant incarnation and which are using its managed services, but it did point to a number of startups, such as GitBook, VoiceFlow, and Dust, which are “mostly” using its managed cloud service — this, effectively, saves resource-restricted companies from having to manage and deploy everything themselves as they would have to with the core open source incarnation.

    However, Zayarni is adamant that the company’s open source credentials are one of the major selling points, even if a company elects to pay for add-on services.

    “When using a proprietary or cloud-only solution, there is always a risk of vendor lock-in,” Zayarni said. “If the vendor decides to adjust the pricing, or change other terms, customers need to agree or consider a migration to an alternative, which isn’t easy if it’s a heavy-production use-case. With open source, there is always more control over your data, and it is possible to switch between different deployment options.”

Alongside the funding today, Qdrant is also officially releasing its managed “on-premise” edition, giving enterprises the option to host everything internally but tap the premium features and support provided by Qdrant. This follows last week’s news that Qdrant’s cloud edition was landing on Microsoft Azure, adding to the existing AWS and Google Cloud Platform support.

Aside from lead backer Spark Capital, Qdrant’s Series A round included participation from Unusual Ventures and 42cap.


    Paul Sawers

    Source link

  • Bitcoin Gaming Company ZEBEDEE Launches Open-Source Bitcoin Development Non-Profit


ZEBEDEE, a bitcoin gaming company, has announced No Big Deal (NBD), a non-profit dedicated to furthering open source development for Bitcoin and the Lightning Network, per a release sent to Bitcoin Magazine.

    “NBD does not sell anything, it does not offer services, it does not support products,” said Andre Neves, co-founder and CTO of ZEBEDEE. “It just writes code and gives it to the world to do with it as they will.”

    Currently, NBD has already contributed to a number of projects. For instance, the non-profit provided code for Open Bitcoin Wallet, which is an advanced non-custodial Lightning wallet that can support hosted channels.


    Shawn Amick

    Source link

  • Tux Paint 0.9.27 Released for Windows, macOS, Android, and Linux


    Press Release


    Nov 28, 2021

    The Tux Paint development team is proud to announce version 0.9.27 of Tux Paint, which adds many new features to the popular children’s drawing program.

    Six new Magic tools have been added to Tux Paint. “Panels” shrinks and duplicates the drawing into a 2-by-2 grid, which is useful for making four-panel comics. “Opposite” produces complementary colors. “Lightning” interactively draws a lightning bolt. “Reflection” creates a lake-like reflection on the drawing. “Stretch” stretches and squashes the picture like a fun-house mirror. Lastly, “Smooth Rainbow” provides a more gradual variation of Tux Paint’s classic “Rainbow” tool.

    A number of existing Magic tools have been updated, as well. Improvements were made to “Halftone,” which simulates photographs on newsprint; “Cartoon,” which makes an image look like a cartoon drawing; and “TV,” which simulates a television screen. Additionally, “Cartoon” and “Halftone,” along with “Blocks,” “Chalk,” and “Emboss,” now offer the ability to alter the entire image at once. Finally, Magic tools are now grouped into collections of similar effects — painting, distorts, color filters, picture warps, pattern painting, artistic, and picture decorations — making it easier to find the tool you need.

    Tux Paint’s Paint and Line tools now support brushes that rotate based on the angle of the stroke. This new rotation feature, as well as the older directional and animated brush features, are now visually indicated by the brush shape selector. Additionally, the Fill tool now offers a freehand painting mode for interactively coloring within a confined area.

    Tux Paint Config, the separate program that ships with Tux Paint to provide a user-friendly method of altering the program’s settings, has been updated to better support larger, high-resolution displays. Also, this version introduces support for the Recycle Bin on Windows — images deleted from Tux Paint’s “Open” dialog will now be placed in the Recycle Bin rather than being deleted immediately.

    The Tux Paint website now hosts a new gallery showcasing fantastic artwork created by Tux Paint artists of all ages. The gallery features over 200 drawings by artists from all around the world.

    Tux Paint is available for download, free of charge, from the project’s website: http://www.tuxpaint.org. Version 0.9.27 is currently available for Microsoft Windows, Apple macOS, Android, Red Hat Linux, various other Linux distributions (via Flatpak), and as source code. Tux Paint is open source software and does not contain in-app advertising.

    Source: Tux Paint



  • Equinox Launches New Website Featuring Open Source Library Products, Services, and Education


    Equinox Open Library Initiative celebrates 15 years as a small business delivering ‘Extraordinary Service. Exceptional Value’ to libraries worldwide

    Press Release

    updated: Apr 20, 2021

    Equinox Open Library Initiative, Inc. proudly announces the launch of its newly designed website https://www.equinoxOLI.org, featuring open source library products, services, and educational resources. Equinox Open Library Initiative, the successor to Equinox Software, Inc., celebrates 15 years as a small business delivering “Extraordinary Service. Exceptional Value” to libraries worldwide. Equinox provides innovative open source software for libraries and consortia of all types, serving academic, public, school, corporate, cultural, and government organizations. The new website serves as the central place for current news from Equinox; information about open source library software, including Evergreen, Koha, Fulfillment, and CORAL; and details and announcements regarding Equinox’s grants, programs, and community events.

    “When you choose Equinox, you’re choosing a mission-driven small business with a proven record of technical expertise and outstanding service,” said Lisa Carlucci, Executive Director. “As we launch the new website and celebrate this important milestone, we are deeply grateful to the libraries, consortia, and community partners who have trusted Equinox to provide best-in-class library technologies.” 

    In addition to open source products, Equinox offers library consulting, training, and technology services. Consulting topics include workflow analysis, process improvement, consortial policy evaluation and management, web design, custom training sessions and workshops, IT services and support, and data services.

    “Our new website highlights our services and programs contributing to library open source software and infrastructure,” said Galen Charlton, Implementation and IT Manager at Equinox. “We hope that libraries and community members find it useful as a hub for finding open source resources and learning more about Equinox.” 

    Follow Equinox Open Library Initiative on Facebook, Twitter, LinkedIn, and Vimeo for the latest updates.

    To receive news directly in your inbox: https://www.equinoxOLI.org/#signup

    For more information:

    Laura Barry
    Communications Coordinator
    Equinox Open Library Initiative, Inc.
    laura.barry@equinoxOLI.org
    877.OPEN.ILS (877.673.6457)

    About Equinox Open Library Initiative

    Equinox Open Library Initiative provides innovative open source software for libraries of all types and delivers extraordinary service at exceptional value. As the successor to Equinox Software, Inc., Equinox Open Library Initiative builds upon more than a decade of trusted service and technical expertise, providing consulting services, software development, hosting, training, and support for Evergreen ILS, Koha ILS, and other open source library software. To learn more, please visit https://www.equinoxOLI.org. For Equinox Library Services Canada, please visit https://www.equinoxOLI.ca.

    Source: Equinox Open Library Initiative

