ReportWire

Tag: bug bounty

  • Apple Announces $2 Million Bug Bounty Reward for the Most Dangerous Exploits

    [ad_1]

    Since launching its bug bounty program nearly a decade ago, Apple has always touted notable maximum payouts—$200,000 in 2016 and $1 million in 2019. Now the company is upping the stakes again. At the Hexacon offensive security conference in Paris on Friday, Apple vice president of security engineering and architecture Ivan Krstić announced a new maximum payout of $2 million for a chain of software exploits that could be abused for spyware.

    The move reflects how valuable exploitable vulnerabilities can be within Apple’s highly protected mobile environment—and the lengths the company will go to to keep such discoveries from falling into the wrong hands. In addition to individual payouts, the company’s bug bounty also includes a bonus structure, adding additional awards for exploits that can bypass its extra secure Lockdown Mode as well as those discovered while Apple software is still in its beta testing phase. Taken together, the maximum award for what would otherwise be a potentially catastrophic exploit chain will now be $5 million. The changes take effect next month.

    “We are lining up to pay many millions of dollars here, and there’s a reason,” Krstić tells WIRED. “We want to make sure that for the hardest categories, the hardest problems, the things that most closely mirror the kinds of attacks that we see with mercenary spyware—that the researchers who have those skills and abilities and put in that effort and time can get a tremendous reward.”

    Apple says that there are more than 2.35 billion of its devices active around the world. The company’s bug bounty was originally an invite-only program for prominent researchers, but since opening to the public in 2020, Apple says that it has awarded more than $35 million to more than 800 security researchers. Top-dollar payouts are very rare, but Krstić says that the company has made multiple $500,000 payouts in recent years.

    In addition to higher potential rewards, Apple is also expanding the bug bounty’s categories to include certain types of one-click “WebKit” browser infrastructure exploits as well as wireless proximity exploits carried out with any type of radio. And there is even a new offering known as “Target Flags” that puts the concept of capture the flag hacking competitions into real-world testing of Apple’s software to help researchers demonstrate the capabilities of their exploits quickly and definitively.

    Apple’s bug bounty is just one of many long-term investments aimed at reducing the prevalence of dangerous vulnerabilities or blocking their exploitation. For example, after more than five years of work, the company announced a security protection last month in the new iPhone 17 lineup that aims to nullify the most frequently exploited class of iOS bugs. Known as Memory Integrity Enforcement, the feature is a big swing aimed at protecting a small minority of the most vulnerable and highly targeted groups around the world—including activists, journalists, and politicians—while also adding defense for all users of new devices. To that end, the company announced on Friday that it will donate a thousand iPhone 17s to rights groups that work with people at risk of facing targeted digital attacks.

    “You can say, well, that seems like a very large effort to protect only that very small number of users that are being targeted by mercenary spyware, but there is just this incontrovertible track record described by journalists, tech companies, and civil society organizations that these technologies are constantly being abused,” Krstić says. “And we feel a great moral obligation to defend those users. Despite the fact that the vast majority of our users will never be targeted by anything like this, this work that we did will end up increasing protection for everyone.”

    [ad_2]

    Lily Hay Newman

    Source link

  • Google adds generative AI threats to its bug bounty program | TechCrunch

    Google adds generative AI threats to its bug bounty program | TechCrunch

    [ad_1]

    Google has expanded its vulnerability rewards program (VRP) to include attack scenarios specific to generative AI.

    In an announcement shared with TechCrunch ahead of publication, Google said: “We believe expanding the VRP will incentivize research around AI safety and security and bring potential issues to light that will ultimately make AI safer for everyone,” 

    Google’s vulnerability rewards program (or bug bounty) pays ethical hackers for finding and responsibly disclosing security flaws. 

    Given that generative AI brings to light new security issues, such as the potential for unfair bias or model manipulation, Google said it sought to rethink how bugs it receives should be categorized and reported. 

    The tech giant says it’s doing this by using findings from its newly formed AI Red Team, a group of hackers that simulate a variety of adversaries, ranging from nation-states and government-backed groups to hacktivists and malicious insiders to hunt down security weaknesses in technology. The team recently conducted an exercise to determine the biggest threats to the technology behind generative AI products like ChatGPT and Google Bard.

    The team found that large language models (or LLMs) are vulnerable to prompt injection attacks, for example, whereby a hacker crafts adversarial prompts that can influence the behavior of the model. An attacker could use this type of attack to generate text that is harmful or offensive or to leak sensitive information. They also warned of another type of attack called training-data extraction, which allows hackers to reconstruct verbatim training examples to extract personally identifiable information or passwords from the data. 

    Both of these types of attacks are covered in the scope of Google’s expanded VRP, along with model manipulation and model theft attacks, but Google says it will not offer rewards to researchers who uncover bugs related to copyright issues or data extraction that reconstructs non-sensitive or public information.

    The monetary rewards will vary on the severity of the vulnerability discovered. Researchers can currently earn $31,337 if they find command injection attacks and deserialization bugs in highly sensitive applications, such as Google Search or Google Play. If the flaws affect apps that have a lower priority, the maximum reward is $5,000.

    Google says that it paid out more than $12 million in rewards to security researchers in 2022. 

    [ad_2]

    Carly Page

    Source link