ReportWire

Tag: defense tech

  • What Would the First Week of World War III Look Like in Space?


    The idea of waging war in orbit is no longer a figment of science fiction. As satellite technologies and launch capabilities have rapidly advanced, military powers increasingly see space as the ultimate high ground. But if World War III really does spill off-planet, what will the outbreak look like?

    For this Giz Asks, we asked several experts how they picture the first week of World War III in space, and apparently things could get really bad, really fast. They warned that cyberattacks, strikes on satellites, and assaults on ground infrastructure would lead to global logistical chaos and debris-filled orbits.

    Scott Shackelford

    Provost professor of business law and ethics and vice chancellor for research at Indiana University-Bloomington. His areas of expertise include cyber security and privacy, international law and relations, property, and sustainability.

    Here is how I envision the first week of World War III in space.

    The first 48 hours wouldn’t start with a “bang” but likely with a “glitch.” We often talk about the Internet of Space, and just like the terrestrial web, the opening moves would be almost entirely cyber-based for purposes of plausible deniability and given the asymmetric threat.

    You’d see massive, coordinated DDoS [Distributed Denial-of-Service] attacks on ground stations and sophisticated “spoofing” of GPS signals [deliberate manipulations of signals transmitted by GPS]. Before a single kinetic weapon is launched, the goal would be to blind the adversary. Imagine the chaos on Earth: global logistics chains freeze, high-frequency trading halts, and your Uber app—along with military drone arrays—suddenly thinks it’s in the middle of the Pacific Ocean. In other words, global chaos could quickly ensue, driving distrust and undermining confidence.
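The kind of positioning failure described above can be made concrete. Below is a minimal, hypothetical plausibility check of the sort a GPS-consuming application might run: flag any position fix whose implied speed from the previous fix is physically impossible for the platform. The function names, speed threshold, and coordinates are illustrative assumptions, not any real receiver's API.

```python
import math

EARTH_RADIUS_M = 6_371_000  # mean Earth radius in meters

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points (degrees)."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def looks_spoofed(prev_fix, curr_fix, max_speed_mps=70.0):
    """Flag a fix whose implied speed from the previous fix is implausible.

    Fixes are (timestamp_s, lat_deg, lon_deg); max_speed_mps is the fastest
    the receiver could plausibly move (70 m/s is roughly 250 km/h).
    """
    t0, lat0, lon0 = prev_fix
    t1, lat1, lon1 = curr_fix
    dt = t1 - t0
    if dt <= 0:
        return True  # non-monotonic timestamps are themselves suspicious
    speed = haversine_m(lat0, lon0, lat1, lon1) / dt
    return speed > max_speed_mps

# A car in Manhattan suddenly "teleports" to the mid-Pacific one second later.
print(looks_spoofed((0.0, 40.7580, -73.9855), (1.0, 0.0, -160.0)))  # True
```

Real anti-spoofing relies on much richer signals (signal strength, multi-constellation cross-checks, inertial sensors), but the sanity check above illustrates why a fix that jumps to the middle of the Pacific is detectable in principle.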

    By day three or four, we move from soft interference to hard disruption. This is where the legal and ethical “grey zones” I study become a literal battlefield. We’d likely see the use of directed-energy weapons (lasers) to “dazzle” or permanently blind reconnaissance satellites. The most contentious issue here will be the commercial sector.

In a modern space war, companies like SpaceX are no longer bystanders; they are essential military infrastructure (SpaceX even operates a dedicated government offering, Starshield). The first week would force a series of legal questions: When does an attack on a private satellite constitute an act of war against its host nation?

    If the conflict escalates to kinetic anti-satellite (ASAT) missiles by day six or seven, we face the “Tragedy of the Space Commons” on a galactic scale. A single destroyed satellite creates a cloud of thousands of high-speed projectiles.

In a “hot” space war, we risk the Kessler Syndrome—a chain reaction of collisions that could render specific orbits, like Low Earth Orbit (LEO), unusable for a prolonged period of time. We wouldn’t just be fighting a war; we’d be building a prison of shrapnel around our own planet. Think WALL-E, just a lot more depressing. Much of the resulting junk in low orbits would burn up in the atmosphere fairly quickly, but debris in higher orbits such as GSO could persist, compounding an already vexing problem.
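The cascade logic behind the Kessler Syndrome can be sketched as a simple branching process: each destroyed satellite spawns fragments, and each fragment has some chance of destroying another satellite. The numbers below are purely illustrative assumptions for a back-of-the-envelope model, not empirical orbital-debris parameters.

```python
def kessler_cascade(initial_hits=1, fragments_per_hit=1000,
                    p_fragment_kills=0.002, steps=6):
    """Toy branching-process model of a debris cascade.

    Each destroyed satellite produces `fragments_per_hit` fragments, and each
    fragment destroys another satellite with probability `p_fragment_kills`
    per step. Expected new kills per kill is R = fragments_per_hit *
    p_fragment_kills; the cascade is self-sustaining when R > 1.
    """
    r = fragments_per_hit * p_fragment_kills
    kills = [float(initial_hits)]
    for _ in range(steps):
        kills.append(kills[-1] * r)  # expected kills in the next "generation"
    return r, sum(kills)

r, total = kessler_cascade()
print(r)      # 2.0 -> supercritical: each kill begets two more on average
print(total)  # 127.0 expected cumulative kills after six generations
```

The point of the sketch is the threshold behavior: with these (assumed) parameters R = 2, so one destroyed satellite grows into over a hundred expected losses within a few generations, whereas any R below 1 would fizzle out.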

    We are far better at creating messes in space than we are at cleaning them up, and our current international legal frameworks—like the 1967 Outer Space Treaty—are unprepared for a world where the “final frontier” becomes a shooting gallery.

    Wendy Whitman Cobb

    Space policy expert whose research focuses on the political and institutional dynamics of space policy, public opinion of space exploration, and the influence of commerce on potential space conflict.

War in space, whether in the context of World War III or otherwise, is intimately linked to war on Earth. Nothing in space is done for space’s sake but to enable (or disable, as the case may be) terrestrial operations or advantages. So if a World War III is being fought on the ground—complete with the existential threats to national survival we might expect to accompany it—we should expect similar results in outer space.

    What exactly this would look like depends on the countries involved and what space capabilities they possess. For the purposes of this question, I’ll assume that the United States, Russia, and China are all involved in the war. If this is the case, we can expect actual attacks on space assets.

    This would include kinetic attacks such as anti-satellite attacks (originating from the ground and on orbit) and non-kinetic attacks such as jamming, lasing, and blinding that would render satellites either permanently or temporarily disabled. We might also see cyber attacks on the computer systems necessary to operate space systems along with ground attacks on the terrestrial segments of space infrastructure (satellite downlink stations, launching facilities, etc.).

    The goal of such attacks would be to disrupt operations on the ground and prevent the major combatants from being able to better see what is happening, communicate, or utilize the technologically advanced kill chains that depend on space-based systems to locate and destroy ground-based targets.

    The consequences of such actions would not only be a complete disruption of space-based systems, but potentially significant damage to the space environment itself.  Kinetic attacks create dangerous debris that could then hit other satellites, disabling or destroying them. Were a nuclear anti-satellite weapon used, it would indiscriminately destroy whatever satellites were in its vicinity.

The result would be to make certain orbits or areas around Earth all but useless because of debris clouds. The danger of creating harmful debris is one factor that we believe tends to tamp down open conflict in space, but if we’re talking about World War III, that deterrent is likely to be of little use, opening the door to attacks and reprisals that could ultimately render all space systems either useless or significantly degraded.

    Bottom line: World War III would be disastrous for those of us on Earth. It would ultimately be reflected in outer space as well.

    Peter W. Singer

    Strategist and senior fellow at the think-tank New America, professor of practice at Arizona State University, and founder and managing partner at Useful Fiction LLC, a company specializing in strategic narrative. His book Ghost Fleet explores the future of war and space.

    The initial phase of a conflict extending into space will likely involve silent battles in a realm where humanity has never before fought. Satellites—which underpin both our economies and military systems—could be targeted by peer satellites, rockets, lasers, and cyber attacks. Yet, despite the spectacular nature of orbital warfare, the ultimate victor may well be determined by two critical aspects rooted right here on planet Earth.

    Rather than “heavens above,” the actual center of gravity in space operations remains the ground stations, fiber nodes, and undersea cables that facilitate space-based data. This means that space conflict might also see conventional and special operations task forces hitting key infrastructure, “global raids” targeting the terrestrial networks that bind the stars to the mud.

    As this infrastructure is global, it might take place not just in the region of conflict, but around the world, in places like South America or East Africa or even in Antarctica. The goal is to strip away an adversary’s space-dependent advantages—GPS, precision timing, and secure comms—at the source.

    The second aspect of space warfare that may well determine the conflict is the ability to get back into space. This involves not just launch infrastructure but resilient satellite production and inventory. If you want to win in space, you will need mastery of reusable rockets and a robust logistics backbone, allowing for the rapid replenishment of satellite constellations that have been blinded or neutralized.

    The victor of the next war in space won’t necessarily be the side with the largest or most expensive satellites. It will be the one that successfully maintains its terrestrial links and orbital replenishment cadence. As such, don’t think of space as a static sanctuary; it is a dynamic maneuver space where the fight on Earth determines the conflict among the stars.


    Ellyn Lapointe

    Source link

  • African defensetech Terra Industries, founded by two Gen Zers, raises additional $22M in a month | TechCrunch


    Just one month after raising $11.75 million in a round led by Joe Lonsdale’s 8VC, African defensetech Terra Industries announced that it’s raised an additional $22 million in funding, led by Lux Capital.

    Nathan Nwachuku, 22, and Maxwell Maduka, 24, launched Terra Industries in 2024 to design infrastructure and autonomous systems to help African nations monitor and respond to threats. 

Terrorism remains one of the biggest threats in Africa, but much of the security intelligence on which its nations rely comes from Russia, China, or the West. In January, CEO Nwachuku said his goal was to build “Africa’s first defense prime, to build autonomous defense systems and other systems to protect our critical infrastructure and resources from armed attacks.”

    At the time, Terra had just won its first federal contract. The company has government and commercial clients, and Nwachuku said Terra had already generated more than $2.5 million in commercial revenue and was protecting assets valued at around $11 billion. 

He said this extension round came fast due to “strong momentum.” Other investors in the round include 8VC, Nova Global, and Resilience17 Capital, which was founded by Flutterwave CEO Olugbenga Agboola. Nwachuku said investors saw “faster-than-expected traction” on deals and partnerships, which created urgency to preempt and increase their commitment. The round came together in just under two weeks, bringing the company’s total funding to $34 million.

Image credits: Terra Industries

The extended raise is not that surprising. After all, building a defense company is not cheap. For comparison, Anduril has raised more than $2.5 billion in funding; Shield AI has raised around $1 billion in equity; drone maker Skydio has raised around $740 million; and naval autonomous vessel maker Saronic has raised around $830 million.

    Since January, Nwachuku said the company has started expanding into other African nations yet to be announced (Terra is based in Nigeria), and has secured more government and commercial contracts, including with AIC Steel, with more to be revealed this year. 


    The partnership with AIC Steel lets Terra establish a joint manufacturing facility in Saudi Arabia focused on building surveillance infrastructure and security systems. “It’s our first major manufacturing expansion outside Africa,” he said.

“The priority is working with countries where terrorism and infrastructure security are major national concerns,” Nwachuku added, citing those falling within the sub-Saharan African and Sahel region in particular. He said many of these countries have not only lost billions in infrastructure, but also thousands of lives in the past few decades.

    “We’re focused on targeting major economies where the need for infrastructure security is urgent and where our solutions can make a meaningful impact. That’s how we think about expansion.” 


    Dominic-Madori Davis

    Source link

  • As Russia Tests NATO’s Limits, Estonia’s Tech Scene Heats Up


When Russia invaded Ukraine in February 2022, it came as little surprise to the international community: leaders from various countries had warned for weeks in advance that Vladimir Putin was poised to launch his attack, and made increasingly desperate public pleas to the Russian leader to step back from the brink—until it was too late. As for the Baltic states that border Russia? They told me they’d been expecting such an action for years, and had encountered meddling and mischief-making from Putin themselves.

That’s partly why Estonia, whose eastern border abuts Russia, has become an outsourced tech lab and factory for Ukraine’s frontline.

    “If you are in war, your sense of urgency is different,” says Allan Martinson, a board member at the Estonian Founders’ Society, who has been around the country’s tech sector for 35 years. Martinson says that Estonia’s defense tech sector has coalesced in the last three years—since Russia crossed the border into Ukraine—and now accounts for around 10 percent of the total Estonian tech sector in terms of revenues.

    The Ukrainian Connection 

    Around 150 companies operate in Estonia’s defense tech sector, and around a third of them are run by Ukrainians, says Martinson—many of whom are still based in their home country but have taken advantage of Estonia’s e-Residency program. (The program allows non-Estonian residents to set up companies in the country within minutes, thanks to its entirely digital government processes.)

The influx of Ukrainians launching startups in a neighboring country to help keep their own safe has been a “very interesting contribution” to the Estonian tech sector, says Martinson. For Ukrainians, Estonia offers a link to the European Union and NATO member states—Ukraine is currently a member of neither. In fact, Ukraine’s attempts to join both blocs are part of Putin’s tenuous public justification for his war.

    “If there are teams that are building in the trenches to beat Russia right now, when they have a moment to think about, ‘Okay, how do we work with our NATO partners? How do you sell to them?’ then Estonia makes a lot of sense. It’s nearby,” says Sten Tamkivi, a former Skype executive who is now a partner at Plural, an Estonian-based tech investment firm. 

Plural recently invested in Helsing, a defense tech firm that initially developed AI battlefield software but expanded into building autonomous strike drones late last year. Tamkivi calls it “the biggest tech breakout story in European defense right now.” Helsing’s eastern NATO flank operations are run through Estonia.

    The links are also deep between the two countries’ governments: the current advisor to Ukraine’s deputy prime minister on AI and digital transformation, Kristjan Ilves, is also the former chief information officer of the Estonian government.

So while Russia’s advance through Ukraine has stalled thanks to international support, including from Estonia, those on the streets of Tallinn and in its tech sector are prepared for any incursion that could come.

    “Mentally speaking, I don’t think if you ask people on the street today they will answer that they’re in war,” says Martinson. “But are they afraid of war? My own perception is that we recognize there is a danger, but we are also not afraid. We are preparing on a national level and individual level, with defense entrepreneurs.”

    Just days before Martinson spoke to me, Russia flew fighter jets over Estonian airspace for 12 minutes, reaching within seconds of the capital, Tallinn, before being escorted out of the country by scrambled NATO jets. But for now, Estonia is a comparatively safe third-party location for Ukrainian entrepreneurs to base their business and its infrastructure. 

    Jumping through hoops 

One snag in all of this is a NATO policy that requires defense tech to serve a dual purpose in order to receive NATO funding and to be part of any NATO state’s supply chain. The need to invent a dual use for technologies that can save lives or defend borders means that encrypted radio communications firms are pretending their products could be used in the mining industry in order to catch investors’ eyes.

     “People are inventing these fake use cases to leave an image that they’re going to go to civilian use cases where they really should be focusing on building what’s necessary right now,” says Tamkivi. “It’s like, ‘Okay, let’s do defense, but let’s at least stay safe. Let’s say that nobody gets hurt,’” he says. “It’s this irrational or unrealistic picture of what is going on,” Tamkivi adds.

Still, Estonia and other NATO countries on the bloc’s eastern flank are more exposed than other NATO states to the problems that can come from a more belligerent Russia. So Estonians tend to believe that more action needs to be taken to tackle the threat.

    Ragnar Saas, co-founder of Estonian defense tech venture capital firm Darkstar, compares the sense of urgency to Ukraine, where innovations are coming thick and fast: “How fast tech is growing in defense is probably the fastest area I know,” he says. “Those guys in Ukraine work seven days a week, because you’re basically defending your home.”

    Saas, whose wife is Ukrainian, and who sends convoys of vehicles from Estonia to Ukraine to help the war effort there, is bullish about Ukraine’s future—in large part because of its tech prowess, backed up by friendly nations like Estonia. “The biggest and best strategy for how Ukraine will win is by their tech,” he says. “They’re developing new weapons systems.”

    The next front

Christoph Kühn, the German deputy director of NATO’s Cooperative Cyber Defence Centre of Excellence, which is based in a square, squat building on the outskirts of the Estonian capital, says that, in his personal view, NATO is already at war with Russia in the cybersphere. The secretary-general of the Estonian foreign ministry, Jonatan Vseviov, gave the same message: Estonia is at war with Russia already, and is willing to defend itself, including by mobilizing its tech sector.

    Estonia’s government is responding to that threat. “For Estonia, helping Ukraine as much as possible is very important,” says Estonian prime minister Kristen Michal, who has been in post for a little over a year. “It’s a priority.” So much so that in the days after Estonia was buzzed by Russian warplanes, his cabinet approved devoting 5 percent of its entire budget to defense.

    “I hope that we can be of assistance and contact with Ukraine, for their defensive industry to exchange intellectual property and different kinds of innovations which are happening there,” Michal says. “Conflicts are usually best for innovation,” he adds.

The country punches above its weight when it comes to tech innovation: Its unicorns include Skype, Wise, Bolt, and Playtech, a leading gambling tech firm. And Estonians believe that they can put that power of innovation to purposes that do more than just benefit people. They can help protect Europe, and themselves.

    “Five years ago, in the whole of Europe, I would suspect that nobody was thinking about the defense tech industry as part of their defense capabilities,” says prime minister Michal. “But right now, after what is happening in Ukraine, they really say, ‘When something happens, we need things to be done here. So we need innovation here. We need things to do here.’”

The prime minister’s word choice seems deliberate. At the moment, Estonia sees things as a case of “when,” not “if.”


    Chris Stokel-Walker

    Source link

  • The Destruction in Gaza Is What the Future of AI Warfare Looks Like


    In 2021, Israel used “the Gospel” for the first time. That was the codename for an AI tool deployed in the 11-day war against Gaza that the IDF has since deemed the first artificial intelligence war. The conclusion of that war didn’t end the conflict between Israel and Palestine, but it was a sign of things to come.

    The Gospel rapidly spews out a mounting list of potential buildings to target in military strikes by reviewing data from surveillance, satellite imagery, and social networks. That was four years ago, and the field of artificial intelligence has since experienced one of the most rapid periods of advancement in the history of technology.

Israel’s latest offensive on Gaza, which marked two years on Tuesday, has been called an “AI Human Laboratory,” where the weapons of the future are tested on live subjects.

    Over the last two years, the conflict has claimed the lives of more than 67,000 Palestinians, upwards of 20,000 of whom were children. As of March 2025, more than 1,200 families were completely wiped out, according to a Reuters examination. Since October 2024, the number of casualties provided by the Palestinian Ministry of Health has only included identified bodies, so the real death toll is likely even higher.

    Israel’s actions in Gaza amount to a genocide, a UN Commission concluded last month.

    Hamas and Israel agreed to the first phase of a ceasefire deal that was announced on Wednesday, but Israeli strikes on Gaza were still continuing as of Thursday morning, according to Reuters. The agreed-upon plan involves the release of Israeli hostages by Hamas in exchange for 1,950 Palestinians taken by Israel and the long-awaited aid convoys. But it does not involve the creation of a Palestinian state, which Israel strictly opposes. On Friday afternoon, Israel said that the ceasefire agreement is now in effect, and President Trump has said there will be a hostage release next week. There have been at least three ceasefire agreements since October 7, 2023.

    Aiding Israel’s destruction in Gaza is an unprecedented reliance on artificial intelligence that is, at least partially, supplied by American tech giants. Israel’s use of AI in surveillance and wartime decisions has been documented and criticized time and again by various media and advocacy organizations over the years.

    “AI systems, and generative AI models in particular, are notoriously flawed with high error rates for any application that requires precision, accuracy, and safety-criticality,” Dr. Heidy Khlaaf, chief AI scientist at the AI Now Institute, told Gizmodo. “AI outputs are not facts; they’re predictions. The stakes are higher in the case of military activity, as you’re now dealing with lethal targeting that impacts the life and death of individuals.”

    AI that generates kill lists

    Although Israel has not disclosed its intelligence software fully and denied some of the AI usage claims, numerous media and non-profit investigations paint a different picture.

    Also used in Israel’s 2021 campaign were two other programs called “Alchemist,” which sends real-time alerts for “suspicious movement,” and “Depth of Wisdom” to map out Gaza’s tunnel network. Both are reportedly in use this time around, as well.

    On top of the three programs Israel has previously openly owned up to using, the IDF also utilizes Lavender, an AI system that essentially generates a kill list of Palestinians. The AI calculates a percentage score for how likely a Palestinian is to be a member of a militant group. If the score is high, the person becomes the target of missile attacks.

    According to a report from Israeli magazine +972, the army “almost completely relied” on the system at least in the early weeks of the war, with full knowledge of the fact that it misidentified civilians as terrorists.

    The IDF required officers to approve any of the recommendations made by the AI systems, but according to +972, that approval process just checked whether or not the target was male.

    Many other AI systems that are in use by the IDF are still in the shadows. One of the few programs also unveiled is “Where’s Daddy?” which was built to strike targets inside their family homes, according to +972.

    “The IDF bombed [Hamas operatives] in homes without hesitation, as a first option. It’s much easier to bomb a family’s home. The system is built to look for them in these situations,” an anonymous Israeli intelligence officer told +972.

    AI in surveillance

The Israeli army also uses AI in its mass surveillance efforts. Yossi Sariel, who led the IDF’s surveillance unit until late last year, when he resigned citing his failure to prevent the October 7 Hamas attack, spent a sabbatical year at a Pentagon-funded defense institution in Washington, D.C., where he shared radical visions of AI on the battlefield, according to a professor at the institute who spoke to the Washington Post last year.

    A Guardian report from August found that Israel was storing and processing mobile phone calls made by Palestinians via Microsoft’s Azure Cloud Platform. After months of protests, Microsoft announced last month that it is cutting off access to some of its services provided to an IDF unit after an internal review found evidence that supported some of the claims in the Guardian article.

    Microsoft denies prior knowledge, but the Guardian report paints a different picture. Microsoft CEO Satya Nadella met with IDF’s spying operations head Sariel in late 2021 to discuss hosting intelligence material on the Microsoft cloud, the Guardian reported.

    “The vast majority of Microsoft’s contract with the Israeli military remains intact,” Hossam Nasr, an organizer with No Azure for Apartheid and a former Microsoft worker, told Gizmodo last month.

    When asked for comment, Microsoft directed Gizmodo to a previous statement the tech giant made on the ongoing internal investigation into how its products are used by Israel’s Ministry of Defense.

    On top of storing and combing through data, AI was used in translating and transcribing the gathered surveillance. But an internal Israeli audit, according to the Washington Post, found that some of the AI models that the IDF used to translate communications from Arabic had inaccuracies.

An Associated Press investigation from earlier this year found that advanced AI models by OpenAI, purchased via Microsoft’s Azure, were used to transcribe and translate the intercepted communications. The investigation also found that the Israeli military’s use of OpenAI and Microsoft technology skyrocketed after October 7, 2023.

    AI-driven surveillance efforts don’t just target residents of Gaza and the West Bank, but they have also been used against pro-Palestinian protestors in the United States. An Amnesty International report from August found that AI products by American companies like Palantir were used by the Department of Homeland Security to target non-citizens who speak out for Palestinian rights.

    “Palantir has had federal contracts with DHS for fourteen years. DHS’s current engagement with Palantir is through Immigration and Customs Enforcement, where the company provides solutions for investigative case management and enforcement operations,” a DHS spokesperson told Gizmodo. “At the Department level, DHS looks holistically at technology and data solutions that can meet operational and mission demands.”

    Palantir has not yet responded to a request for comment.

    AI-driven accusations

    The proliferation of AI-generated video and images has done more than just flood the internet with slop. It has also caused widespread confusion for social media users over just what’s real and what’s fake. The confusion is understandable, but it has been co-opted to discredit the voices of the oppressed. In this case, too, Gazans have been at the receiving end of the attacks.

    The videos and photos coming out of Gaza are referred to in Israel as “Gazawood”, with many claiming that the images are staged or completely AI-generated. Since Israel has not allowed foreign journalists into Gaza and not only discredits but also disproportionately targets the enclave’s journalists in air strikes, the truth becomes harder to validate.

    In one instance, Saeed Ismail, a real 22-year-old Gazan who had been raising money online to feed his family, was accused of being AI-generated due to misspelled words on his blanket featured in one video. Gizmodo verified his existence in July.

    American big tech is leading the way

    While Israeli tech startups find a sizable market in the U.S. and deals with government agencies like ICE, the relationship goes both ways.

It’s tough to precisely map out which American companies have fed the technology used to target and kill Palestinians. But it is clear which Big Tech companies proudly partner with the Israeli army, and the answer is almost all of them.

    Microsoft has received much of the recent attention from activists, but Google, Amazon, and Palantir are considered some of the other top American third-party vendors for the IDF.

    Google and Amazon employees have been protesting for years over “Project Nimbus,” a $1.2 billion contract signed in 2021 that tasks the American tech giants with providing cloud computing and AI services to the Israeli military.

    Amazon suspended an engineer last month for emailing the CEO, Andy Jassy, about the project and speaking out against it in company Slack channels.

    Although Google has also clamped down on employee criticism, when the deal was signed in 2021, Google officials themselves raised concerns that the cloud services could be used for human rights violations against Palestinians, according to a 2024 New York Times report.

    The Israeli military also requested access to Google’s Gemini as recently as last November, according to a Washington Post report.

    Palantir, which offers software like the Artificial Intelligence Platform (AIP) that analyzes enemy targets and proposes battle plans, agreed to a strategic partnership with the IDF to supply its technology to “the current situation in Israel,” Palantir executive vice president Josh Harris told Bloomberg last year.

    Palantir has been under fire globally for its partnership with the Israeli army. Late last year, a major Norwegian investor sold all of its Palantir holdings due to concerns of international human rights law violations. The investing company said that an analysis indicated that Palantir aided an AI-based IDF system that ranked Palestinians based on the likelihood to launch “lone wolf terrorist” attacks, which then led to preemptive arrests.

    CEO Alex Karp has stood behind the company’s decision to back Israel in its war against Gazans many times.

    The IDF has also inked data center deals with Cisco and Dell, and a cloud computing deal with independent IBM subsidiary Red Hat.

    “IBM holds human rights and freedoms in the highest regard, and we are deeply committed to conducting our business with integrity, guided by our robust ethical standards,” IBM told Gizmodo. “As for the UN report, most of its claims are inaccurate and should not be treated as fact.”

    Cisco, Dell, Google, Amazon, and OpenAI did not respond to a request for comment.

    In August, the Washington Post unveiled a 38-page alleged plan for Gaza to become a U.S.-operated tech hub.

Called the Gaza Reconstitution, Economic Acceleration and Transformation Trust (or GREAT), the plan involves “temporarily relocating” the remaining two million or so Palestinians to build six to eight AI-powered smart cities, regional data centers to serve Israel, and something called “The Elon Musk Smart Manufacturing Zone.” The plan would convert Gaza into a “trusteeship” administered by the U.S. for at least 10 years.

    Future of AI warfare and surveillance

    AI companies want in on the battlefield.

There is huge demand from militaries around the globe for the AI systems provided by tech giants. America is pouring millions of dollars into integrating AI systems into military decision-making, such as identifying strike targets under its Thunderforge program. Chinese leader Xi Jinping has also reportedly made military artificial intelligence a top strategic priority.

    As the technology is still maturing, active war zones and the civilians living there become test subjects for AI-powered killing machines. Like Gaza, Ukraine has also been described as a real-time testing ground for AI-powered military technology. In that case, though, the Ukrainian government itself is on board with it.

    Over the summer, the Ukrainian military announced “Test in Ukraine,” a scheme that invites foreign arms companies to test out their latest weapons on the front lines of the Russia-Ukraine war.

    On top of its abundant deals with the Israeli army, Palantir is also very popular with the American Department of Defense. The company inked a $10 billion software and data contract with the U.S. Army in August.

    One could argue that profit will always override every other incentive, but even Palantir drew a line recently when asked to participate in a controversial UK digital identification program, arguing that the program needed to be “decided at the ballot box,” according to the Times.

    We’ve seen tech companies back away from military projects, like Project Maven, in the past when they felt the cultural winds blowing against them. For now, the Trump administration wants Americans leading the way on the AI battlefield. While external criticism and internal pressure from employees still exist at the biggest AI firms, they currently have a plausible argument that this is what the American people voted for. Until that changes, the gold rush for military funds will persist.


    Rhett Jones


  • VC Trae Stephens says he has a bunker (and much more) in talk about Founders Fund and Anduril | TechCrunch


    Last night, for an evening hosted by StrictlyVC, this editor sat down with Trae Stephens, a former government intelligence analyst turned early Palantir employee turned investor at Founders Fund, where Stephens has cofounded two companies of his own. One of these is Anduril, the buzzy defense tech company that is now valued at $8.4 billion by its investors. The other is Sol, which makes a single-purpose, $350 headset that weighs about the same as a pair of sunglasses and that is focused squarely on reading, a bit like a wearable Kindle. (Having put on the pair that Stephens brought to the event, I immediately wanted one of my own, though there’s a 15,000-person waitlist right now, says Stephens.)

    We spent the first half of our chat talking primarily about Founders Fund, kicking off the conversation by talking about how Founders Fund differentiates itself from other firms (board seats are rare, it doesn’t reserve money for follow-on investments, consensus is largely a no-no).

    We also talked about a former colleague who manages to get a lot of press (Stephens rightly ribbed me for talking about him during our own conversation), whether Founders Fund has concerns that Elon Musk is stretching himself too thin (it has stakes in numerous Musk companies), and what happens to another portfolio company, OpenAI, if it loses too much talent, now that it has let its employees sell some percentage of their shares at an $86 billion valuation.

    The second half of our conversation centered on Anduril, and here’s where Stephens really lit up. It’s not surprising. Stephens lives in Costa Mesa, Calif., and spends much of each day overseeing large swaths of the outfit’s operations. Anduril is also very much on the rise right now for obvious reasons.

    If you’d rather watch the talk, you can catch it below. For those of you who prefer reading, what follows is much of that conversation, edited lightly for length.

    Keith Rabois, who recently re-joined Khosla Ventures, was reported to have been “pushed out” of Founders Fund after a falling out with colleagues. Can you talk a bit about what happened?

    At Founders Fund, everyone has their own style. And one of the benefits that really comes down from Peter from the beginning, when we were first founded around 20 years ago, is that everyone should run their own strategy. I do strategy in a different way than [colleague] Brian [Singerman] does venture. It’s different than the way that Napoleon [Ta] — who runs our growth fund — does venture, and that’s good, because we get different looks that we wouldn’t otherwise get by having people executing these different strategies. Keith had a very different strategy. He had a very specific strategy that was very hands-on, very engaged, and I think Khosla is a very good fit for that. . .and I’m really happy that he found a place where he feels like he has a team that can back him up in that execution.


    You’ve talked in the past about Founders Fund not wanting to back founders who need a lot of hand holding . . .

    The ideal case for a VC is you have a founder who is going to be really good at running their own business, and there’s some unique edge that you can provide to help them. The reality is that that’s usually not the case. Usually the investors who think they’re the most value-added are the most annoying and difficult to deal with. The more a VC says ‘I’m going to add value,’ the more you should hear them say, ‘I’m going to annoy the ever-living crap out of you for the rest of the time that I’m on the cap table.’ If we believe that we, Founders Fund, are necessary to make the business work, we should be investing in ourselves, not the founders.

    I find it interesting that so much ink was spilled when Keith moved to Miami, and again when he moved back to the Bay Area in a part-time capacity. People thought Founders Fund had moved to Florida, but you’ve told me the bulk of the firm remains in the Bay Area.

    The vast majority of the team is still in San Francisco. . . Even when I joined Founders Fund 10 years ago, it was really a Bay Area game. Silicon Valley was still the dominant force. I think if you look at fund five, which is the one I entered at Founders Fund, something like 60% to 70% of our investments were Bay Area companies. If you look at fund seven, which is the last vintage, the majority of the companies were not in the Bay Area. So whatever people thought about Founders Fund relocating to Miami, that was never the case. The idea was that if things are geographically distributed, we should have people who are closer to the other things that are interesting.

    Keith said something earlier today at the [nearby] Upfront Summit about founders in the Bay Area being comparatively lazy and not willing to work nine to nine on weekdays or on Saturdays. What do you think about that and also, do you think founders should be working those hours?

    I used to work for the government, where, when you speak publicly, the goal is to say as many words as possible without saying anything . . .it’s just like the teacher from Charlie Brown, rah, rah, rah, rah, rah. Keith is really good at saying things that journalists ask about later. That’s actually good for Keith. He made us talk about him here on stage. He wins. I think the reality is that there aren’t enough people in the world that say things that people remember that are worth talking about later. My goal for the rest of this talk is to find something to say that someone will ask about later today or tomorrow, ‘Can you believe Trae said that?’

    I have a solution to that, but that comes later! OpenAI is a portfolio company; you bought secondary shares. It just oversaw another secondary sale. Its employees have made a lot of money (presumably) from these sales. Does that concern you? Do you have a stance on when is too soon for employees to start selling shares to investors?


    In tech, the competition for talent is really fierce, and companies want their employees to believe that their equity has real monetary value. Obviously it would be bad if you said, ‘You can sell 100% of your vested equity,’ but at a fairly early stage, I think it’s fine to say, ‘You’ve got 100,000 shares vested; maybe you can sell 5% to 10% of that in a company-facilitated tender, so that when you’re being compensated with equity, that’s real and that’s part of your total comp package.’

    But the scale is so different. This is a company with an $86 billion valuation [per these secondary buyers], so 5% to 10% is a lot.

    I think if you start seeing a performance degradation related to people checking out because they have too much liquidity, then yeah, that becomes a pretty serious problem. I haven’t seen that happen at OpenAI. I feel like they are super mission-motivated to get to [artificial general intelligence], and that’s a really meaty mission.

    You’re also an investor in SpaceX. You’re an investor in Neuralink. Are you also an investor in Boring Company?

    We’re an investor in Boring Company.

    Are you an investor in X?

    No. No, no, no, no. [Laughs.]

    But you’re in the business of Elon Musk, as I guess anyone who’s an investor would want to be. Are you worried about him? Are you worried about a breaking point?

    I’m not personally concerned. Elon is one of the most unique and generational talents that I think I’ll see for the rest of my life. There are always trade-offs. You go above a certain IQ point and the trade-offs become quite severe, and Elon has a set of trade-offs. He’s incredibly intense. He will outwork anyone. He’s brilliant. He’s able to organize a lot of stuff in his brain. And there are going to be other parts of life that suffer.

    You are very involved in the day-to-day of Anduril, more than I realized. You’ve built these autonomous vessels and aircraft. You recently introduced Roadrunner, a VTOL that can handle varying payloads. Can you give us a curtain raiser about what else you’re working on?

    The nature of Anduril and what we’re doing there is that the threat that we’re facing globally is very different than it was in 2000 through 2020, when we were talking about non-state actors: terrorist organizations, insurgent groups, rogue states, things like that. It looks now more like a Cold War conflict against near-peer adversaries. And the way we engaged with great power conflict during the Cold War was by building these really expensive, exquisite systems: nuclear deterrents, aircraft carriers, multi-hundred-million-dollar aircraft missile systems. [But] we find ourselves in these conflicts where our adversaries are showing up with these low-cost attritable systems: things like a $100,000 Iranian Shahed kamikaze drone or a $750,000 Turkish TB2 Bayraktar or simple rockets and DJI drones with grenades attached to them with little gripper claws.

    Our response to that has been historically to shoot a $2.25 million Patriot missile at it, because that’s what we have, that’s what’s in our inventory. But this isn’t a scalable solution for the future. So since we were founded, Anduril has looked at: how can we reduce the cost of engagement, while also removing the human operator, removing them from the threat of loss of life . . .And these capabilities are not hardware capabilities largely; this is about autonomy, which is a software problem . . .so we wanted to build a company that’s software-defined and hardware-enabled, so we’re bringing these systems that are low cost and supplementing the existing capabilities to create a continued deterrent impact so that we avoid global conflict . . .You want to do things in attritable ways that reduce the cost of life and the capital costs of deploying these systems, [yet] that still allow you to demonstrate total technological superiority on the battlefield to the extent that you prevent conflict from ever happening.

    I’d read a story recently where someone from one of the defense ‘primes,’ as they’re called, rolled their eyes and said defense tech upstarts don’t know enough yet about mass production. Is that a concern for you? 

    Startups don’t know how to do mass production. But primes also don’t know how to do mass production. You can look at the Boeing 737 problem if you want some evidence of that. We have no supply of Stingers, Javelins, HIMARS, GMLRS, or Patriot missiles — they can’t make them fast enough. And the reason is they built these supply chains and manufacturing facilities that are more like the manufacturing facilities of the Cold War.

    To look at an analogy to this, when Tesla went out to build at massive scale, they said, ‘We need to build an autonomous factory from the ground up to actually hit the demand requirements for producing at a low cost and at the scale that we need to grow.’ And GM looked at that and said, ‘That’s ridiculous. This company will never scale.’ And then five years later, it was evident that they were just getting absolutely smoked. So I think the primes are saying this because it’s the defensive reaction that they would have: to say these upstarts will never get it.

    Anduril is trying to build a Tesla. We’re going to build a modular, autonomous factory that’s going to be able to keep up with the demand that the customer is throwing at us. It’s a big bet, but we hired the guy that did it at Tesla. His name is Keith Flynn. He’s now our Head of Production.


    I’m sure you get asked a lot about the danger of autonomous systems. Sam Altman, at one of these events, told me years ago that it was among his biggest fears when it comes to AI. How do you think about that?

    Throughout the course of human history, we’ve gotten more and more violent. We started with, like, punching each other and then hitting each other with rocks, and then eventually we figured out metals and we started making swords and bows and arrows and spears, and then catapults, and then eventually we got to the advent of gunpowder. And then we started dropping bombs on each other, and then in the 1940s, we reached the point where we realized we had humanity-destroying capability in nuclear weapons. Then everyone kind of stopped. And we stood around and we said, ‘It would not be good to use nuclear weapons. We can all kind of agree we don’t actually want to do this.’

    If you look at the curve of that violent potential, it started coming down during the Cold War, where you had precision-guided munitions. If you need to take out a target, [the question became] can you shoot a missile through a window and only take out the target that you’re intending to take out? We got much more serious about intelligence operations so we could be more precise and more discriminating in the attacks that we delivered. I think autonomous systems are the far reach of that. It’s saying, ‘We want to prevent the loss of human life. What can we do to eliminate that, to the extent possible to be absolutely sure that when we take lethal action, we’re doing it in the most responsible way possible’ . . .

    Am I scared of Terminator? Sure, there’s some potential hypothetical future where the AGI becomes sentient and decides that we will be better off making paper clips. We’re not close to that right now. No one in the DoD or any of our allies and partners is talking about sentient AGI taking over the world and that being the goal of the DoD. But in 2016, Vladimir Putin, in a speech to the Technical University of Moscow, said ‘He who controls AI controls the world,’ and so I think we have to be very serious about recognizing that our adversaries are doing this. They’re going to be building into this future. And their goal is to beat us to that. And if they beat us to it, I’d be much more concerned about that Terminator reality than if we, in a democratic Western society, were the ones that control the edge.

    Speaking of Putin, what is Anduril doing in Ukraine?

    We’re deployed all over the world in conflict zones, including Ukraine. You go into a conflict with the technology you already have, not with the technology you hope to have in the future. So much of the technology that the United States, the UK, and Germany sent over to Ukraine was Cold War-era technology. We were sending them things that were sitting in warehouses that we needed to get out of our inventory as quickly as possible. Anduril’s goal, aside from supporting those conflicts, is to build the capabilities that we need to build, to ensure that the next time there’s a conflict, we have a big inventory of stuff that we can deploy very quickly to support our allies.

    You’re privy to conversations that we probably can’t imagine. What is in your survival kit? And is it in a bunker?

    I do have a bunker, I can confirm. What’s in my survival kit? I don’t think I have any interesting ideas here. It’s like, you want non-perishables. You want a big supply of water. It might not hurt to have some shotguns. I don’t know. Find your own bunker. It turns out you can buy Cold War-era missile silos that make for great bunkers, and there’s one for sale right now in Kansas. I would encourage any of you [in the audience] that are interested to check it out.

    You’re obviously very passionate about this country. You worked in government service. You work with Peter Thiel, who has thrown his resources behind people who’ve been elected to public office, including now, Ohio Senator J.D. Vance. Will we ever see you run for office?

    I’m not personally opposed to the idea, but my wife — who I love very much — said she would divorce me if I ever ran for public office. So the answer is a strong no.



    Connie Loizos
