ReportWire

Tag: Artificial Intelligence

  • FORO Founder Brett Boston: ‘Time to Talk About How Good A.I. is Getting for Meeting IIJA Challenges’


    Press Release


    Oct 13, 2022

    Artificial Intelligence is a proven technology for identifying hidden relationships and patterns in massive amounts of data. While its use in banking, retail and healthcare has been widespread, it remains underutilized in transportation decision-making. FORO is proving it’s time to start taking its potential seriously for meeting transportation challenges.

    FORO pioneered AI solutions in Indiana more than three years ago, when it built customized algorithms to process INDOT asset management data. Using a real-time process with INDOT stakeholders, FORO developed an AI engine to process and refine project decisions and financials. This approach identified an additional $106 million in savings from project bundling and cut the time required to select bundled projects from months to minutes.

    Since then, FORO has expanded its AI process into:

    • Automating estimate and bid process analysis
    • Prioritizing and sequencing IIJA projects
    • Projecting IIJA impacts on contract letting and pricing
    • Calculating and identifying equity across projects

    FORO’s AI work for DOTs has been growing ever since. Why? The answer is simple: visionary DOT clients need new ways to maximize their mission with limited resources and staff, especially now with the influx of new funds from the federal Infrastructure Investment and Jobs Act (IIJA).

    Applying AI to specific agency challenges is where FORO’s tailored approach has blossomed. FORO and DOT staffs work together to build custom AI engines that save staff time and money and support better strategic decisions.

    “Technology is part of the solution, but it isn’t the whole answer,” says Vern Herr, Chief AI Evangelist of FORO. “Our process is human-centric. People aren’t going away. They’re just going to be able to move faster, make fewer mistakes and understand what data is really useful for making decisions. AI helps people and processes get smarter quickly.”

    FORO’s AI philosophy is prudent: Take a one-step-at-a-time approach. They work hand-in-hand with their DOT clients to identify the places where AI can add value, gather relevant client data, and define vital business rules.

    FORO founder Brett Boston says, “Working with DOT partners, we are creating the next generation of AI applications for DOTs across the country. Now, with the huge influx of IIJA funding, our DOT expertise has never been more relevant or in demand. Since staffs are not increasing and there are no in-house AI data gurus, FORO helps DOTs make the most of what they have, fulfill their mission and maximize the stewardship and value of taxpayer dollars.

    “America’s infrastructure needs work. Our roads and bridges are outdated in many areas, and it’s the roads that keep the economy rolling. It is the job and responsibility of each DOT to get that done, and we love helping these dedicated professionals keep the best transportation system on the planet running as efficiently as possible.

    “Their challenges are many, and FORO AI is a small part of meeting those challenges. Our customers have always been our best advertising. No moon-shot promises, just grounded and demonstrable results using our FORO process and AI.”

    Every AI journey starts with a first step, and that step for transportation is getting better every day with FORO.

    About FORO

    Founded in 2017, FORO facilitates improvements in organization, collaboration, and decision-making using artificial intelligence. FORO is based in Atlanta, GA, with offices in St. Louis, MO.

    www.teamforo.com

    Source: FORO


  • Microsoft unveils $4,299 Surface desktop computer | CNN Business




    CNN —

    Microsoft’s most expensive Surface device is about to get even pricier.

    At a press event on Wednesday, Microsoft is set to unveil several Surface Pro tablets, Surface Laptop models and a Surface Studio 2+ desktop computer, the last of which has not been updated in several years.

    The new 28-inch Surface Studio 2+, an all-in-one desktop, now has an Intel Core H-35 processor, 50% faster CPU performance than its predecessor and an updated NVIDIA chip for faster graphics. The device also includes an updated display, cameras and microphones, and supports a digital pen for on-screen drawing. It also has several ports, including USB with Thunderbolt 4, and the display can be split among four apps at once for greater multitasking.

    The Surface Studio 2+ starts at $4,299, or $4,499 with the digital pen. The previous Surface Studio 2, released in 2018, received some criticism for its $3,499 starting price. Microsoft told CNN Business this year’s price jump is attributable to several significant improvements, including the new processor, a 1 TB SSD for faster file transfers and an enhanced 1080p camera, among other features.

    The announcements about the refreshed Surface product lineup will kick off Microsoft’s days-long Ignite developer conference on Wednesday. The event comes as Microsoft marks the tenth anniversary of the Surface line, which originally launched with a tablet to take on the iPad.

    Like other tech companies that have unveiled new products this fall, Microsoft is also confronting a more difficult economic environment, including high inflation and fears of a looming recession, that could make it harder to convince customers to spend three or even four figures upgrading devices.

    While the new Surface products aren’t much different in terms of design or screen size than previous iterations, the latest devices feature some upgrades, including new chipsets for better performance.

    Microsoft showed off its flagship Surface Pro 9 tablet, once again aimed at replacing the laptop. The two-in-one device features an aluminum casing in new colors as well as a built-in kickstand and a PixelSense display. Underneath the display is an HD camera, updated speakers and microphones, and a custom G6 chip. Microsoft said the chip helps power apps with digital ink, such as Ink Focus in Microsoft OneNote and the GoodNotes app for Windows 11, which is designed to make it feel like the user is writing with a pen and paper.

    The Surface Pro 9 also offers a choice of processors. The first option is a 12th Gen Intel Core processor built on the Intel Evo platform with Thunderbolt 4 – a combination that promises 50% more performance, better multitasking and desktop productivity, faster data transfer, and the ability to dock to multiple 4K displays. The second option is a Microsoft SQ3 processor powered by Qualcomm Snapdragon with 5G connectivity, up to 19 hours of battery life and new AI features.

    The Surface Pro 9 is available in four colors: platinum, graphite, sapphire and forest. It starts at $999.

    Microsoft also introduced an update to its ultra-portable laptop, Surface Laptop 5, which looks very similar to its predecessor but with a processor update that may attempt to bring it closer in competition with Apple’s ARM-based chipsets for macOS laptops.

    The Surface Laptop 5 runs on the Intel Evo platform and comes in two display sizes: 13.5 inches and 15 inches. It comes with updated Dolby Atmos 3D spatial speakers, a front-facing HD camera that automatically adjusts exposure in any lighting, and several new aluminum colors, such as cool metal, sage and alcantara. The company also promises a full day of battery life on a single charge and says the laptop is 50% more powerful than its predecessor.

    The Surface Laptop 5 starts at $999 for the 13.5-inch version and $1,299 for the 15-inch model. Pre-orders for the new Surface products begin Wednesday in select markets, and the devices start hitting shelves later this month.

    Microsoft hardware devices amount to between 3% and 5% of the tablet market, according to David McQueen, an analyst at ABI Research. The bulk of the company’s revenue instead comes from its operating system across different device types, along with associated applications and cloud services.

    “Microsoft is able to stay in the hardware sector because of revenue generated from these services,” McQueen said. It’s an approach similar to Google’s, whose Pixel smartphone remains a niche product but serves as a way for the company to highlight its apps and OS.

    On Wednesday, the company also announced a new Microsoft Designer app and an Image Creator tool in Bing and the Edge browser to bring advanced graphic design to mainstream audiences. The platform relies heavily on a partnership with startup OpenAI and its AI-powered DALL-E 2 tool, which generates custom images from text prompts. DALL-E 2 is also coming to Microsoft’s Azure OpenAI Service.

    Brands are increasingly using DALL-E 2 for both ads and product inspiration, according to Microsoft. In a blog post, the company detailed how toy company Mattel used DALL-E 2 to conceptualize how future cars might look, such as by changing colors and typing “make it a convertible,” among other commands.

    Experts in the AI field have raised concerns that the open-ended nature of these systems — which makes them adept at generating all kinds of images from words — and their ability to automate image-making mean they could automate bias on a massive scale. In previous tests of OpenAI’s system, for example, typing in “CEO” produced images that all appeared to be of men, nearly all of them white.

    Microsoft said it is taking the concerns seriously. Inappropriate text requests will be denied by Microsoft’s servers, according to the company, and users will ultimately be banned for repeat offenses.


  • Mobile Network Data, an Efficient Method for Assessing the Spread of Epidemics


    Newswise — The onset of the COVID-19 pandemic in March 2020 forced governments around the world to take measures to prevent its spread among the population and, thus, reduce the number of fatalities caused by the virus. A few months later, as mobility restrictions and confinements were gradually lifted, states decided to launch tracking apps that citizens could download to their cell phones to find out if nearby contacts were infected with COVID. However, for these apps to be truly effective, they require a large number of people to have them installed on their devices, and they also involve certain privacy risks.

    Now an IMDEA Networks research team led by Elisa Cabana (postdoctoral researcher) and Nikolaos Laoutaris (Research Professor), in collaboration with Andra Lutu (Telefónica Research) and Enrique Frías-Martínez (Camilo José Cela University), has carried out a study proposing a method that uses mobile network data to detect possible hospitalizations due to COVID-19 and to build the corresponding epidemic risk maps. The paper, “Improving epidemic risk maps using mobility information from mobile network data,” will be published at the ACM SIGSPATIAL conference in November 2022.

    Cabana explains that the main advantage of the proposed solution is that, unlike Contact Tracing, “the data is already available at the operator and progress is faster. You don’t need to have GPS enabled and an application downloaded.” “When you have mobile data connected, your device connects to a cell tower that identifies your location radius. And that’s how you study the spatio-temporal mobility of people,” she adds. Another plus point is that the method works with anonymized data and can be run on the operator’s premises under its standard security provisions.

    According to Laoutaris, the method works as follows: “We check the location of a phone late at night, and if it is not connected to the usual phone towers it was connected to in the pre-pandemic era, we see if it was connected to a tower near a hospital that is receiving COVID patients. If it was, the person who owns the cell phone is labeled as potentially hospitalized. The method also includes filters to eliminate false positives, such as people who live near or work in hospitals.”
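
    The detection heuristic, as described, can be summarised in a few lines of code. The sketch below is a minimal Python illustration of that night-time rule, assuming toy tower-ID sets and a made-up function name; it is not the study’s actual implementation.

    # Minimal sketch of the hospitalization-detection heuristic described above.
    # Field names, tower IDs and the false-positive filter are illustrative assumptions.
    def detect_potential_hospitalization(night_towers, home_towers,
                                         hospital_towers, pre_pandemic_night_towers):
        """Flag a device as potentially hospitalized for one night."""
        away_from_home = not (night_towers & home_towers)          # not at its usual night towers
        near_hospital = bool(night_towers & hospital_towers)       # connected near a COVID hospital
        # Filter: ignore devices that already spent nights near a hospital pre-pandemic
        usually_near_hospital = bool(pre_pandemic_night_towers & hospital_towers)
        return away_from_home and near_hospital and not usually_near_hospital

    # Toy usage
    print(detect_potential_hospitalization(
        night_towers={"T42"},
        home_towers={"T7", "T8"},
        hospital_towers={"T42", "T43"},
        pre_pandemic_night_towers={"T7", "T8"},
    ))  # True: away from home and near a hospital tower it never used at night before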

    As indicated in their study, mobile network data can be exploited to understand the dynamics of urban mobility and its impact on the spread of contagious diseases such as cholera, and also to predict the risk of viruses such as dengue, Zika or malaria, or other new ones that may emerge in the future.

    The team has applied their methods to an anonymized dataset of more than 2 million cell phones, collected by a mobile network provider located in London, UK, during the months of March and April 2020. They have concluded that this method yields a 98.6% agreement with public records of patients admitted to National Health Service (NHS) hospitals.

    Phases of the data collection process

    In the first phase, the research group describes the algorithm for detecting possible COVID hospitalizations from the mobile network data, as well as the parameters involved. The second phase consists of validating these data by checking the cases reported by London hospitals to the National Health Service and comparing them with those obtained with the proposed method. Finally, in the third phase, they analyze the mobility pattern of each person detected as hospitalized during the two weeks prior to their hospitalization day. With this information, they obtain dynamic and detailed risk maps that change over time and thus more accurately capture the distribution, evolution and intensity of the disease.

    Compared to census-based maps, their risk maps indicate that the areas at highest risk are not necessarily the most densely populated and can change from day to day. In addition, they have observed that hospitalized people tend to have a higher average mobility than non-hospitalized people.

    Elisa Cabana stresses that the most relevant result of her research is precisely the risk maps, since they not only allow the evolution of an epidemic to be visually analyzed, but can also be very beneficial for different sectors of society. “At the individual level, representing each area with a more or less intense color, which can vary over time, depending on a risk measure, is useful because it can help people to take additional protective measures, at each time and place. For emergency teams and decision-makers, it would help to assess the level of stress in the health system, as well as the severity and intensity of spread, and the advantages or disadvantages of certain decisions (use of masks, quarantine, vaccination). In general, the spatio-temporal information extracted from mobile network data, and the tools we develop with that information, can benefit both individuals and the policies and important decisions being developed against existing and future epidemics,” she concludes.

    E. Cabana, A. Lutu, E. Frías-Martínez, N. Laoutaris, “Improving Epidemic Risk Maps Using Mobility Information from Mobile Network Data,” ACM SIGSPATIAL ’22 (extended abstract; full version at the SpatialEpi ’22 workshop).


    IMDEA Networks Institute


  • The White House released an ‘AI Bill of Rights’ | CNN Business




    CNN —

    The White House on Tuesday released a set of guidelines it hopes will spur companies to make and deploy artificial intelligence more responsibly and limit AI-based surveillance, despite the fact that there are few US laws compelling them to do so.

    The guidelines, which have been in the works for a year, are not binding in any way. But the White House hopes the blueprint will convince tech companies to take additional steps to protect consumers, including clearly explaining how and why an automated system is in use and designing AI systems to be equitable. The blueprint joins a number of other voluntary efforts to adopt rules on transparency and ethics in AI, which have come from government agencies, companies and non-government groups.

    Though the use of AI has proliferated in recent years — it is used for everything from confirming people’s identities for unemployment benefits to generating highly realistic pictures in response to written prompts — the US legislative landscape has not kept pace. There are no federal laws specifically regulating AI or applications of AI, such as facial-recognition software, which privacy and digital rights groups have criticized for years over privacy concerns and its role in the wrongful arrests of at least several Black men, among other issues.

    A handful of individual states have their own rules. Illinois, for instance, has a law known as the Biometric Information Privacy Act (BIPA), which forces companies to get permission from people before collecting biometric data like fingerprints or scans of facial geometry. It also allows Illinois residents to sue companies for alleged violations of the law. Since 2019, a number of communities and some states have also banned the use of facial-recognition software in various ways, though a few have since pulled back on such rules.

    The Blueprint for an AI Bill of Rights includes five principles: That people should be protected from systems deemed “unsafe or ineffective;” that people shouldn’t be discriminated against via algorithms and that AI-driven systems should be made and used “in an equitable way;” that people should be kept safe “from abusive data practices” by safeguards built in to AI systems and have control over how data about them is used; that people should be aware when an automated system is in use and be aware of how it could affect them; and that people should be able to opt out of such systems “where appropriate” and get help from a person instead of a computer.

    “Much more than a set of principles, this is a blueprint to empower the American people to expect better and demand better from their technologies,” said Alondra Nelson, the deputy director of the White House Office of Science and Technology Policy, during a press briefing.

    While some privacy and technology advocates responded positively to the guidelines, they also pointed out that they are just that, guidelines — and not legally binding.

    In a statement, Alexandra Reeve Givens, president and CEO of the Center for Democracy and Technology, a Washington, DC-based nonprofit, said, “Today’s agency actions are valuable, but they would be even more effective if they were built on a foundation set up by a comprehensive federal privacy law.”

    In a separate statement, ReNika Moore, director of the American Civil Liberties Union’s Racial Justice Program, called the principles “an important step in addressing the harms of AI” and added that “there should be no loopholes or carve-outs for these protections.”

    “It’s critical that the Biden administration use all levers available to make the promises of the Bill of Rights blueprint a reality,” Moore said.


  • Anyone can now use powerful AI tools to make images. What could possibly go wrong? | CNN Business




    CNN Business —

    If you’ve ever wanted to use artificial intelligence to quickly design a hybrid between a duck and a corgi, now is your time to shine.

    On Wednesday, OpenAI announced that anyone can now use the most recent version of its AI-powered DALL-E tool to generate a seemingly limitless range of images just by typing in a few words, months after the startup began gradually rolling it out to users.

    The move will likely expand the reach of a new crop of AI-powered tools that have already attracted a wide audience and challenged our fundamental ideas of art and creativity. But it could also add to concerns about how such systems could be misused when widely available.

    “Learning from real-world use has allowed us to improve our safety systems, making wider availability possible today,” OpenAI said in a blog post. The company said it has also strengthened the ways it rebuffs users’ attempts to make its AI create “sexual, violent and other content.”

    There are now three well-known, immensely powerful AI systems open to the public that can take in a few words and spit out an image. In addition to DALL-E 2, there’s Midjourney, which became publicly available in July, and Stable Diffusion, which was released to the public in August by Stability AI. All three offer some free credits to users who want to get a feel for making images with AI online; generally, after that, you have to pay.

    These so-called generative AI systems are already being used for experimental films, magazine covers, and real-estate ads. An image generated with Midjourney recently won an art competition at the Colorado State Fair, and caused an uproar among artists.

    In just months, millions of people have flocked to these AI systems. More than 2.7 million people belong to Midjourney’s Discord server, where users can submit prompts. OpenAI said in its Wednesday blog post that it has more than 1.5 million active users, who have collectively been making more than 2 million images with its system each day. (It should be noted that it can take many tries to get an image you’re happy with when you use these tools.)

    Many of the images that have been created by users in recent weeks have been shared online, and the results can be impressive. They range from otherworldly landscapes and a painting of French aristocrats as penguins to a faux vintage photograph of a man walking a tardigrade.

    The ascension of such technology, and the increasingly complicated prompts and resulting images, has impressed even longtime industry insiders. Andrej Karpathy, who stepped down from his post as Tesla’s director of AI in July, said in a recent tweet that after getting invited to try DALL-E 2 he felt “frozen” when first trying to decide what to type in and eventually typed “cat”.

    [Image caption: CNN’s Rachel Metz created this half-duck, half-corgi with the AI image generator Stable Diffusion.]

    “The art of prompts that the community has discovered and increasingly perfected over the last few months for text -> image models is astonishing,” he said.

    But the popularity of this technology comes with potential downsides. Experts in AI have raised concerns that the open-ended nature of these systems — which makes them adept at generating all kinds of images from words — and their ability to automate image-making means they could automate bias on a massive scale. A simple example of this: When I fed the prompt “a banker dressed for a big day at the office” to DALL-E 2 this week, the results were all images of middle-aged white men in suits and ties.

    “They’re basically letting the users find the loopholes in the system by using it,” said Julie Carpenter, a research scientist and fellow in the Ethics and Emerging Sciences Group at California Polytechnic State University, San Luis Obispo.


    These systems also have the potential to be used for nefarious purposes, such as stoking fear or spreading disinformation via images that are altered with AI or entirely fabricated.

    There are some limits on what images users can generate. For example, OpenAI has DALL-E 2 users agree to a content policy that tells them not to try to make, upload, or share pictures “that are not G-rated or that could cause harm.” DALL-E 2 also won’t run prompts that include certain banned words. But manipulating verbiage can get around limits: DALL-E 2 won’t process the prompt “a photo of a duck covered in blood,” but it will return images for the prompt “a photo of a duck covered in a viscous red liquid.” OpenAI itself mentioned this sort of “visual synonym” in its documentation for DALL-E 2.
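
    To see why such keyword limits are easy to sidestep, consider a toy blocklist filter in Python. The banned-word list and function below are made-up illustrations, not OpenAI’s actual content-policy code; the point is simply that the rephrased “viscous red liquid” prompt contains no banned token.

    # Toy illustration of a banned-word blocklist and the "visual synonym" workaround.
    BANNED_WORDS = {"blood", "gore"}  # hypothetical list, not OpenAI's

    def passes_blocklist(prompt: str) -> bool:
        words = {w.strip(".,!?").lower() for w in prompt.split()}
        return not (words & BANNED_WORDS)

    print(passes_blocklist("a photo of a duck covered in blood"))                 # False: blocked
    print(passes_blocklist("a photo of a duck covered in a viscous red liquid"))  # True: slips through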

    Chris Gilliard, a Just Tech Fellow at the Social Science Research Council, thinks the companies behind these image generators are “severely underestimating” the “endless creativity” of people who are looking to do ill with these tools.

    “I feel like this is yet another example of people releasing technology that’s sort of half-baked in terms of figuring out how it’s going to be used to cause chaos and create harm,” he said. “And then hoping that later on maybe there will be some way to address those harms.”

    To sidestep potential issues, some stock-image services are banning AI images altogether. Getty Images confirmed to CNN Business on Wednesday that it will not accept image submissions that were created with generative AI models, and will take down any submissions that used those models. This decision applies to its Getty Images, iStock, and Unsplash image services.

    “There are open questions with respect to the copyright of outputs from these models and there are unaddressed rights issues with respect to the underlying imagery and metadata used to train these models,” the company said in a statement.

    But actually catching and restricting these images could prove to be a challenge.


  • Tesla robot slowly walks on stage at AI Day | CNN Business



    Washington, DC (CNN) —

    Tesla revealed on Friday a prototype of a humanoid robot that it says could be a future product for the automaker.

    The robot, dubbed Optimus by Tesla, walked stiffly on stage at Tesla’s AI Day, slowly waved at the crowd and gestured with its hands for roughly one minute. Tesla CEO Elon Musk said the robot was operating without a tether for the first time. Robotics developers often use tethers to support robots that aren’t yet capable of walking without falling and damaging themselves.

    Optimus’ abilities appear to trail significantly behind what robots from competitors like Hyundai-owned Boston Dynamics are capable of. Boston Dynamics robots have been seen doing backflips and performing sophisticated dance routines without a tether.

    “The robot can actually do a lot more than we just showed you,” Musk said at the event. “We just didn’t want it to fall on its face.”

    Tesla also showed videos of its robot performing simple tasks like carrying boxes and watering plants with a watering can.

    Musk claimed that if the robot was produced in mass volumes it would “probably” cost less than $20,000. Tesla maintains that Optimus’ advantage over competitors will be its ability to navigate independently using technology developed from Tesla’s driver-assistance system “Full Self Driving,” as well as cost savings from what it has learned about manufacturing from its automotive division. (Tesla’s “Full Self Driving” requires a human that is alert and attentive, ready to take over at any time, as it is not yet capable of fully driving itself.)

    Tesla has a history of aggressive price targets that it doesn’t ultimately reach. The Tesla Model 3 was long promised as a $35,000 vehicle, but it could only very briefly be purchased for that price, and not directly on Tesla’s website. The most affordable Tesla Model 3 now costs $46,990. When Tesla revealed the Cybertruck, its pickup truck, in 2019, it was said to cost $39,990; the truck remains unavailable for purchase today, and the price has since been removed from Tesla’s website.

    Tesla AI Day is intended largely as a recruiting event to attract talented people to join the company.

    Musk claimed the robot could be transformative for civilization. The robot displayed Friday, despite its limitations compared to competitors, was significantly ahead of what Tesla revealed a year ago, when a person jumped on stage in a robot suit and danced around.

    “Last year was just a person in a robot suit,” Musk said before the robot walked on stage. “We’ve come a long way. Compared to that, it’s going to be very impressive.”

    Tesla is not the first automaker to develop a humanoid robot. Along with Hyundai’s Boston Dynamics, Honda worked on robots dubbed “Asimo” for nearly 20 years. In its final form, Asimo was a child-size humanoid robot capable of untethered walking, running, climbing and descending stairs, and manipulating objects with its fingers.


  • Tesla robot walks, waves, but doesn’t show off complex tasks


    DETROIT — An early prototype of Tesla Inc.’s proposed Optimus humanoid robot slowly and awkwardly walked onto a stage, turned, and waved to a cheering crowd at the company’s artificial intelligence event Friday.

    But the basic tasks performed by the robot with exposed wires and electronics — as well as a later, next-generation version that had to be carried onstage by three men — were a long way from CEO Elon Musk’s vision of a human-like robot that can change the world.

    Musk told the crowd, many of whom might be hired by Tesla, that the robot can do much more than the audience saw Friday. He said it is also delicate and “we just didn’t want it to fall on its face.”

    Musk suggested that the problem with flashy robot demonstrations is that the robots are “missing a brain” and don’t have the intelligence to navigate themselves, but he gave little evidence Friday that Optimus was any more intelligent than robots developed by other companies and researchers.

    The demo didn’t impress AI researcher Filip Piekniewski, who tweeted it was “next level cringeworthy” and a “complete and utter scam.” He said it would be “good to test falling, as this thing will be falling a lot.”

    “None of this is cutting edge,” tweeted robotics expert Cynthia Yeung. “Hire some PhDs and go to some robotics conferences @Tesla.”

    Yeung also questioned why Tesla opted for its robot to have a human-like hand with five fingers, noting “there’s a reason why” warehouse robots developed by startup firms use pinchers with two or three fingers.

    Musk said that Friday night was the first time the early robot had walked onstage without a tether. Tesla’s goal, he said, is to make an “extremely capable” robot in high volumes — possibly millions of them — at a cost lower than that of a car, which he guessed would be less than $20,000.

    Tesla showed a video of the robot, which uses artificial intelligence that Tesla is testing in its “Full Self-Driving” vehicles, carrying boxes and placing a metal bar into what appeared to be a factory machine. But there was no live demonstration of the robot completing the tasks.

    Employees told the crowd in Palo Alto, California, as well as those watching via livestream, that they have been working on Optimus for six to eight months. People can probably buy an Optimus “within three to five years,” Musk said.

    Employees said Optimus robots would have four fingers and a thumb with a tendon-like system so they could have the dexterity of humans.

    The robot is backed by giant artificial intelligence computers that track millions of video frames from “Full Self-Driving” autos. Similar computers would be used to teach tasks to the robots, they said.

    Experts in the robotics field were skeptical that Tesla is anywhere close to rolling out legions of human-like home robots that can do the “useful things” Musk wants them to do – say, make dinner, mow the lawn, keep watch on an aging grandmother.

    “When you’re trying to develop a robot that is both affordable and useful, a humanoid kind of shape and size is not necessarily the best way,” said Tom Ryden, executive director of the nonprofit startup incubator Mass Robotics.

    Tesla isn’t the first car company to experiment with humanoid robots.

    Honda more than two decades ago unveiled Asimo, which resembled a life-size space suit and was shown in a carefully-orchestrated demonstration to be able to pour liquid into a cup. Hyundai also owns a collection of humanoid and animal-like robots through its 2021 acquisition of robotics firm Boston Dynamics. Ford has partnered with Oregon startup Agility Robotics, which makes robots with two legs and two arms that can walk and lift packages.

    Ryden said carmakers’ research into humanoid robotics can potentially lead to machines that can walk, climb and get over obstacles, but impressive demos of the past haven’t led to an “actual use scenario” that lives up to the hype.

    “There’s a lot of learning that they’re getting from understanding the way humanoids function,” he said. “But in terms of directly having a humanoid as a product, I’m not sure that that’s going to be coming out anytime soon.”

    Critics also said years ago that Musk and Tesla wouldn’t be able to build a profitable new car company that used batteries for power rather than gasoline.

    Tesla is testing “Full Self-Driving” vehicles on public roads, but they have to be monitored by selected owners who must be ready to intervene at all times. The company says it has about 160,000 vehicles equipped with the test software on the road today.

    Critics have said the Teslas, which rely on cameras and powerful computers to drive by themselves, don’t have enough sensors to drive safely. Tesla’s less capable Autopilot driver-assist system, with the same camera sensors, is under investigation by U.S. safety regulators for braking for no reason and repeatedly running into emergency vehicles with flashing lights parked along freeways.

    In 2019, Musk promised a fleet of autonomous robotaxis would be in use by the end of 2020. They are still being tested.

    ————

    O’Brien reported from Providence, Rhode Island.


  • Tesla’s AI Day is tonight. It may wow you — or end with a gaffe | CNN Business



    Washington, DC (CNN Business) —

    Tesla (TSLA) will hold its second annual AI Day in Palo Alto, California, Friday evening. The six-hour event will include updates on Tesla’s work in artificial intelligence, “Full Self-Driving,” its supercomputer “Dojo” and maybe a humanoid robot, according to invitations posted online by Tesla supporters. The event is expected to be live-streamed.

    Dojo is a supercomputer designed to train AI systems to complete complex tasks, such as those behind Tesla’s driver-assistance systems Autopilot and “Full Self-Driving,” which can perform some driving tasks like steering and keeping pace with traffic. Tesla’s previous AI Day included detailed technical explanations of the company’s work in a bid to attract leading engineers.

    Tesla CEO Elon Musk has claimed before that in the long run people will think of Tesla as an AI company rather than a car company or energy company. He has said that Tesla AI may play a role in computers matching general human abilities, a huge milestone many experts say is decades away and perhaps unattainable. Musk, who has a long history of ambitious predictions, has said it may be reached in 2029.

    But more limited and easier to develop forms of artificial intelligence — like identifying emergency vehicles stopped on a highway — have proven to be a significant hurdle for the company as it pursues its dreams of self-driving cars. AI powers “Full Self-Driving,” but the system has faced criticism and backlash as it still requires driver intervention to prevent collisions and Musk’s deadlines for its capabilities slip year after year.

    And this summer Tesla’s director of artificial intelligence, Andrej Karpathy, exited the company, several months after it was announced he was taking a sabbatical.

    It’s not easy to predict what may or may not show up at any event helmed by Musk. Products heralded and talked about sometimes don’t perform as designed — like when Musk showed off the Tesla Cybertruck’s supposedly “unbreakable” windows, which promptly broke — and can’t even be bought years later. (Three years after that event, Tesla sells a T-shirt that memorializes the broken window, but it has yet to sell a Cybertruck.)

    Musk has unquestionably disrupted entire industries with his work at Tesla and SpaceX. But he’s also earned a reputation for missing deadlines and overpromising.

    Last year’s AI Day “surprise,” for instance, was a Tesla “robot,” which was just a human dancing in a suit.

    Musk then claimed that the automaker is building a 5-foot-8, 125-pound humanoid robot, called Optimus or Tesla Bot, and said a prototype would likely be unveiled this year. It’s unclear whether a prototype will be revealed Friday, but Musk tweeted Thursday that the event would include “cool hardware demos.”

    Tesla is also working on wheeled robots for manufacturing and autonomous logistics, according to a Tesla job posting for a senior humanoid mechatronics robotics architect.

    Musk claimed last year that the humanoid robot would have a profound impact on the economy. It would begin by working on boring, repetitive and dangerous tasks, he said.

    Tesla and Musk are not, of course, the first to bet on robots. Robots already handle many factory jobs, and companies like Boston Dynamics have worked for years to develop humanoid, animal-like, and other robots for industrial applications.

    Humanoid robots have long fascinated the public and earned a place in pop culture as powerful but sometimes dangerous. Tesla tapped into this when it posted on Instagram in a promotion for the event that, “if you can run faster than 5 mph, you’ll be fine.” The Tesla humanoid robot is planned to have a top speed of 5 mph, the automaker has said.

    But creating a humanoid robot that rivals a human’s abilities has proved incredibly difficult for robotics experts. Artificial intelligence has seen major advances yet trails the general abilities of a human toddler. Most robots in use today are restricted to simple tasks in basic environments like vacuuming a home or moving parts in a factory.

    Tesla would not be the first automaker to build a humanoid robot, either. Honda worked on a series of robots, known as Asimo, for nearly 20 years. The Japanese company shut down development of Asimo in 2018. Korean automaker Hyundai bought Boston Dynamics in 2020.

    Musk said Thursday that AI Day would be “highly technical” as it is meant for recruiting engineers to work on artificial intelligence, robotics and computer chips.

    “Engineers who understand what problems need to be solved will like what they see,” Musk tweeted Friday.

    Tesla did not respond to a request for comment.


  • Tesla expected to show humanoid robot Optimus demo on Friday night at AI Day 2022


    Tesla CEO Elon Musk and leaders from the company’s AI and hardware teams are expected to speak at the company’s AI Day 2022, an engineer-recruiting event, which will be live-streamed on Friday starting around 5:00 p.m. in California. You can watch AI Day 2022 here.

    During the last AI Day in August 2021, Musk said Tesla was going to build a humanoid robot, which is referred to as either the Tesla Bot or Optimus today.

    “It’s intended to be friendly, of course, and navigate through a world of humans, and eliminate dangerous, repetitive and boring tasks,” Musk said at the time.

    Tesla didn’t have a hardware prototype to show last year and made the 2021 announcement with an actor dressed in a Tesla Bot body suit dancing on stage. The stunt drew sneers from critics and cheers from fans.

    This year, investors are expecting a real tech demonstration of the robot, along with updates on Tesla’s progress developing self-driving technology that can turn the company’s existing electric vehicles into robotaxis.

    Musk has been promising a truly self-driving Tesla since 2016, when he said a coast-to-coast demo would happen by the end of 2017. To date, the company has only released driver-assistance systems that must be constantly supervised by a human driver who remains attentive to the road and their car, ready to take over at any time.

    When Musk originally floated the humanoid robot concept at AI Day 2021, he said of Optimus, “It should be able to, ‘please go to the store and get me the following groceries,’ that kind of thing.”

    Later, Musk said that robots made by Tesla will one day be worth more than its cars, and that thousands of them would be put to work moving parts around the factories, where humans build cars and batteries.

    During Tesla’s 2021 fourth-quarter earnings call, Musk remarked: “If you think about the economy– the foundation of the economy is labor. Capital equipment is distilled labor. So what happens if you don’t actually have a labor shortage? I’m not sure what an economy even means at that point. That’s what Optimus is about, so very important.” 

    Tesla has a mixed record with automation.

    As Bernstein senior research analyst Toni Sacconaghi wrote in a September 30 note ahead of AI Day 2022, in 2018 Tesla “had mistakenly tried to hyper-automate its final assembly (i.e. putting parts into cars).” The result was that Musk soon admitted “excessive automation at Tesla was a mistake” and that “humans are underrated.”

    Tesla brought more people back to its manufacturing and assembly lines after that, but Sacconaghi writes that today Tesla is over-automating its customer service. Tesla owners generally find it difficult to get in touch with individual sales and service reps at Tesla, and are steered to conduct all possible resolution of complaints through Tesla’s mobile app.

    Alexander Kernbaum, a long-time robotics engineer who now serves as interim director of robotics at the vaunted research and development nonprofit SRI International, says that whether or not Tesla impresses with its robotics update at AI Day, the company has the resources to develop something meaningful and has inspired new interest in the field.

    However, Kernbaum notes, when it comes to creating a robot that can make a difference in a car assembly plant, there’s really no need for Tesla to develop a bipedal robot. “Mobile robots will find uses,” he explains, “but mobility should be as simple as possible in a factory environment, meaning wheels would be the way to go, not legs.”

    Robotic legs require a lot of power, for one thing, which would strain any battery Tesla develops for its robots. Additionally, legged robots — like people — can trip and fall. Wheeled robots would not be as likely to tip over. Safety should be the paramount concern in a factory, Kernbaum suggests.

    Kernbaum believes Tesla would be best-served to focus on robotic hands. He said, “Hands are like the ultimate multi-tool. Dexterity and in-hand object manipulation are the grand 10-year challenges that will have an obvious impact on all precision manufacturing and on everything really.”

    AI Day 2022 will be the company’s first major event since Tesla’s former AI leader, Andrej Karpathy, resigned. AI Day precedes Tesla’s third-quarter vehicle production and deliveries report, which is expected within days.


  • The road to future AI is paved with trust


    Newswise — The place of artificial intelligence (AI) in our everyday life is growing, and many researchers believe that what we have seen so far is only the beginning. However, AI must be trustworthy in all situations. Linköping University is coordinating TAILOR, an EU project that has drawn up a research-based roadmap intended to guide research funding bodies and decision-makers toward the trustworthy AI of the future.

    “The development of artificial intelligence is in its infancy. When we look back at what we are doing today in 50 years, we will find it pretty primitive. In other words, most of the field remains to be discovered. That’s why it’s important to lay the foundation of trustworthy AI now,” says Fredrik Heintz, professor of artificial intelligence at LiU, and coordinator of the TAILOR project.

    TAILOR is one of six research networks set up by the EU to strengthen research capacity and develop the AI of the future. The foundation of trustworthy AI is being laid by TAILOR, by drawing up a framework, guidelines and a specification of the needs of the AI research community. “TAILOR” is an abbreviation of Foundations of Trustworthy AI – integrating, learning, optimisation and reasoning.

    The roadmap now presented by TAILOR is the first step on the way to standardisation, where the idea is that decision-makers and research funding bodies can gain insight into what is required to develop trustworthy AI. Fredrik Heintz believes that it is a good idea to show that many research problems must be solved before this can be achieved. 

    The researchers have defined three criteria for trustworthy AI: it must conform to laws and regulations, it must satisfy several ethical principles, and its implementation must be robust and safe. Fredrik Heintz points out that these criteria pose major challenges, in particular the implementation of the ethical principles.

    “Take justice, for example. Does this mean an equal distribution of resources or that all actors receive the resources needed to bring them all to the same level? We are facing major long-term questions, and it will take time before they are answered. Remember – the definition of justice has been debated by philosophers and scholars for hundreds of years,” says Fredrik Heintz.

    The project will focus on large comprehensive research questions, and will attempt to find standards that all who work with AI can adopt. But Fredrik Heintz is convinced that we can only achieve this if basic research into AI is given priority. 

    “People often regard AI as a technology issue, but what’s really important is whether we gain societal benefit from it. If we are to obtain AI that can be trusted and that functions well in society, we must make sure that it is centred on people,” says Fredrik Heintz.

    Many of the legal proposals written within the EU and its member states are written by legal specialists. But Fredrik Heintz believes that they lack expert knowledge within AI, which is a problem. 

    “Legislation and standards must be based on knowledge. This is where we researchers can contribute, providing information about the current forefront of research, and making well-grounded decisions possible. It’s important that experts have the opportunity to influence questions of this type,” says Fredrik Heintz.

    The complete roadmap is available at: Strategic Research and Innovation Roadmap of trustworthy AI


    Linköping University


  • Unreliable neurons improve brain functionalities


    Newswise — The brain is composed of tens of billions of neurons which communicate with each other. Each neuron collects its many inputs and transmits a spike to its connecting neurons. The dynamics of such large and highly interconnected neural networks is the basis of all high-order brain functionalities.

    In an article published today in the journal Scientific Reports, a group of scientists has experimentally demonstrated that there are frequent periods of silence in which a neuron fails to respond to its inputs. As opposed to electronic devices, which are fast and reliable, the brain is composed of unreliable neurons. “A logic gate always gives the same output to the same input; otherwise electronic devices like cellphones and computers, which are composed of many billions of interconnected logic gates, wouldn’t function well,” said Prof. Ido Kanter, of Bar-Ilan University’s Department of Physics and Gonda (Goldschmied) Multidisciplinary Brain Research Center, who led the study. “Compare the unreliability of the brain to a computer or cellphone: one time your computer answers 1+1=2 and other times 1+1=5, or dialing 7 on your cellphone many times can result in 4 or 9. Silencing periods would appear to be a major disadvantage of the brain, but our latest findings have shown otherwise.”

    Contrary to what one might think, Kanter and team have demonstrated that neuronal silencing periods are not a disadvantage representing biological limitations, but rather an advantage for temporal sequence identification. “Assume you would like to remember a phone number, 0765…,” said Yuval Meir, a co-author of the study. “Neurons which were active when the digit 0 was presented might be silenced when the next digit 7 is presented, for example. Consequently, each digit is trained on a different dynamically created sub-network, and this silencing mechanism enables our brain to identify sequences efficiently.”
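
    The idea in that quote can be mimicked with a toy simulation: each item in a sequence is handled by a different, randomly drawn subset of “awake” units, so the identity of the active sub-network itself carries positional information. The Python sketch below is purely illustrative and assumes arbitrary sizes and silencing rates; it is not the study’s model.

    # Toy illustration of "silencing": each digit of a sequence is routed to a
    # different, dynamically drawn sub-network of non-silenced neurons.
    import random

    N_NEURONS = 20
    SILENCE_FRACTION = 0.5  # assumed fraction of neurons silenced per item

    def active_subnetwork(rng):
        """Indices of neurons that are not silenced for the current item."""
        return {i for i in range(N_NEURONS) if rng.random() > SILENCE_FRACTION}

    rng = random.Random(0)
    for position, digit in enumerate("0765"):
        subnet = active_subnetwork(rng)
        print(f"digit {digit} (position {position}) handled by neurons {sorted(subnet)}")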

    The brain’s silencing mechanism is a proposed source for a new AI mechanism and has also been demonstrated as the basis for a new type of cryptosystem for handwriting recognition at automated teller machines (ATMs). The cryptosystem allows the user to write their personal identification number (PIN) on an electronic board rather than keying it into the ATM. The sequence identification developed by Kanter and team, based on neuronal silencing periods, is capable of identifying not only the correct PIN but also the user’s personal handwriting style and the timing with which each digit of the PIN is written on the board. These added features act as safeguards against stolen cards, even if a thief knows the user’s PIN.
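
    A rough sense of how such a safeguard could work: verification would need to match not just the PIN digits but also per-digit writing dynamics against an enrolled profile. The Python sketch below is a loose, hypothetical illustration with made-up timing features and thresholds; it is not the cryptosystem developed by the researchers.

    # Toy sketch: accept a PIN only if both the digits and the per-digit writing
    # times resemble the enrolled profile. All values are invented for illustration.
    ENROLLED_PIN = "0765"
    ENROLLED_TIMES_S = [0.8, 0.6, 0.7, 0.9]   # typical seconds spent writing each digit
    TIME_TOLERANCE_S = 0.25

    def verify(pin, digit_times_s):
        if pin != ENROLLED_PIN or len(digit_times_s) != len(ENROLLED_TIMES_S):
            return False
        return all(abs(t - ref) <= TIME_TOLERANCE_S
                   for t, ref in zip(digit_times_s, ENROLLED_TIMES_S))

    print(verify("0765", [0.85, 0.55, 0.75, 0.95]))  # True: right PIN, familiar timing
    print(verify("0765", [0.20, 0.20, 0.20, 0.20]))  # False: right PIN, unfamiliar timing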

    This latest research by Kanter and team shows that it is not always beneficial to eliminate the unreliability of stuttering neurons in the brain, because that unreliability has advantages for higher brain functions.



    Bar-Ilan University


  • Knight Security Announces the Final EMERGE Event of 2022


    Press Release


    Sep 13, 2022

    The world’s leading security and business intelligence manufacturers will showcase emerging innovations in Physical Security, Smart City, and Cybersecurity technologies at EMERGE ’22 San Antonio. The EMERGE ’22 event will take place on Oct. 13 at Topgolf San Antonio, and although free, space is limited. This event will provide businesses, schools and colleges, local government, and law enforcement officials with the tools to manage their infrastructure, mitigate risks, build safer environments, and drive economic growth.

    In addition, Mr. Ernesto Ballesteros, who serves as the Cybersecurity State Coordinator for the State of Texas at the DHS Cybersecurity & Infrastructure Security Agency, will present on Critical Infrastructures’ Cybersecurity. Mr. Ballesteros has over 20 years of experience in cybersecurity in both the private and public sectors. Through his work at CISA, Mr. Ballesteros fosters strategic public and private sector relationships in the Texas Region 6, which includes the San Antonio Area, to develop and maintain secure and resilient infrastructures.

    Mr. Ballesteros will discuss best practices and no-cost cybersecurity resources CISA provides organizations to assist with understanding risks, safeguarding critical assets, and achieving cyber resilience. 

    “Our country has been cyber-threatened for years, but with the recent Russia-Ukraine war, we continue to see an exponential increase in cyber-attacks against Critical Infrastructure Sectors. Among which are Financial Services, Healthcare & Public Health, Energy & Utilities, and Communication & Transportation Systems. Therefore, security professionals must take every step possible to secure them,” said Phil Lake, Knight Security Systems President.

    At EMERGE ’22 San Antonio, more than 18 leading security manufacturers will give live demos of emerging security technologies, such as artificial intelligence, license plate recognition, video analytics, video as a service, access control as a service, supply chain assurance, zero trust security, and health intelligence monitoring, among other security innovations.

    This year’s EMERGE ’22 San Antonio conference offers a unique opportunity for security professionals from all industries and government entities to come together, learn about security threats and risk mitigation, discuss solutions, and network with peers.

    EMERGE ’22 San Antonio is free, but space is limited. Please register now to save your seat. 

    About Knight Security Systems

    Knight Security Systems has built its reputation over almost four decades as one of Texas’ leading providers of security system solutions. With more than 7,000 systems installed since 1983, Knight has helped thousands of clients reduce internal and external loss, legal liability, and employee liability while increasing productivity, safety, compliance, customer satisfaction, and bottom-line profits.

    Most of these client engagements turn into trusted long-term relationships — through continuous system health monitoring, steadfast support, and a watchful eye toward future client needs.

    Nelson Torres

    (832) 540-3141

    ntorres@knightsecurity.com

    https://knightsecurity.com/emerge-san-antonio


    ###

    Source: Knight Security Systems


  • Kronos Fusion Energy Plans to Make the U.S. a World Leader in Fusion Energy Generation


    Press Release


    Mar 15, 2022

    The United States is falling behind in the race for fusion energy. Kronos Fusion Energy has the ambitious goal of creating commercial and defense applications that will make the United States a world leader in fusion energy generation. Decades of research and development and recent technological breakthroughs have brought us to an inflection point in fusion power. Using advances in machine learning, artificial intelligence, and quantum computing, Kronos Fusion Energy will use proprietary algorithms in simulations that will greatly accelerate the design of an optimized fusion energy generator. Find out how Kronos Fusion Energy is contributing to the future of fusion energy here.

    There is great potential for many military applications for fusion energy across all domains: land, air, sea, space, and cyberspace. On land, clean power with a spectacular reduction in logistics requirements will greatly enhance both the readiness and force protection of U.S. military service members. At sea, there is potential to create fusion power generators for submarines and ships that will be faster, safer, and more powerful with reduced operational costs. In air and space, direct fusion drive technology is emerging that will extend ranges and performance of U.S. military aircraft while also dramatically reducing payload and travel time in the exploration of the universe. In cyberspace, compact and reliable power generation greatly enhances the performance of critical cyber warfare systems. 

    From algorithms to simulation to commercialization, Kronos Fusion Energy plans to build viable fusion energy generators for use at military installations and deployed locations by 2036 and seeks opportunities to incorporate fusion energy across all domains of possible warfare. 

    Kronos Fusion Energy Defense Systems plans to get fusion energy out of the laboratory and onto potential battlefields by aggressively synchronizing a unity of effort. Brig. Gen. (ret.) Paul E. Owen, Founding Partner and CEO of Kronos Fusion Energy Defense Systems, said, “KFEDS recognizes the criticality of the commercialization of emerging technologies and is already grabbing the bull by the horns, building a team that incorporates leadership from across the three pillars of academia, government and industry. This unified effort will allow us to deliver clean, limitless fusion energy to the American people.”

    For further information:

    Kronos Fusion Energy

    1122 Colorado St

    Austin, TX 78701

    https://www.kronosfusionenergy.com/

    PR Contact – Erin Pendleton – e.pendleton@kronosfusionenergy.com

    Source: Kronos Fusion Energy


  • Rebel Space Technologies Awarded NASA Cognitive Communication Contract


    Press Release



    updated: Sep 21, 2021

    Rebel Space Technologies has been awarded a NASA Phase II Small Business Innovation Research (SBIR) contract for their proposal SpaceWeaver: A Collaborative Smart Network for Space Communications. The proposal leverages Rebel Space’s proprietary Rebel Cognitive Communications & Control software (Rebel-C3) for autonomous network management, in addition to their partner Prewitt Ridge’s VerifAI capability.

    SpaceWeaver utilizes Rebel-C3 to create a distributed cognitive space communications network for lunar operations that increases mission science data return and improves network resource efficiencies for NASA missions. Rebel-C3 uses artificial intelligence enhanced distributed sensing and optimized data routing to ensure efficient, resilient operations in an unpredictable space environment. The ultimate goal is to coordinate the transfer and relay of mission data across the lunar architecture based on data priority, content, schedule, and environmental conditions, a necessity for future lunar missions.
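
    Routing decisions like these can be illustrated with a toy priority scheduler. The sketch below is purely hypothetical (the class, field, and function names are illustrative assumptions, not Rebel-C3’s actual software or API); it shows one simple way mission data might be ranked by priority and deadline before being relayed over a limited contact window.

    ```python
    import heapq
    from dataclasses import dataclass, field

    @dataclass(order=True)
    class Bundle:
        """A hypothetical unit of mission data awaiting relay."""
        sort_key: tuple = field(init=False, repr=False)
        priority: int      # lower value = more urgent
        deadline_s: float  # seconds until the data loses value
        name: str
        size_mb: float

        def __post_init__(self):
            self.sort_key = (self.priority, self.deadline_s)

    def schedule(bundles, link_mbps, window_s):
        """Pick which bundles fit into one contact window, most urgent first."""
        queue = list(bundles)
        heapq.heapify(queue)
        budget_mb = link_mbps / 8 * window_s  # rough capacity of the window
        sent = []
        while queue:
            b = heapq.heappop(queue)
            if b.size_mb <= budget_mb:
                budget_mb -= b.size_mb
                sent.append(b.name)
            # else: a real scheduler would defer the bundle to a later window
        return sent

    print(schedule(
        [Bundle(2, 600, "imaging", 800.0),
         Bundle(1, 60, "telemetry", 5.0),
         Bundle(3, 3600, "housekeeping", 50.0)],
        link_mbps=100, window_s=30))
    # -> ['telemetry', 'housekeeping']; the large imaging bundle waits for more capacity
    ```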

    The SBIR Phase I and Phase II efforts and resulting prototype software will contribute to future NASA missions and lunar operations. Rebel Space co-founder Carrie Hernandez stated, “The continued research and development under the NASA Phase II contract aligns with our company’s commercial goal of providing autonomous network management software to deliver a 10x network performance improvement in challenging industrial environments. We believe that our Rebel-C3 software can help businesses significantly optimize their operations by providing customized network management that takes advantage of 5G capabilities and simplifies the adoption of Industry 4.0 technology.”

    Rebel Space is partnering with Prewitt Ridge, whose patent-pending method for processing unstructured and semi-structured textual data into technical decisions is known as VerifAI. “We’re ecstatic to adapt VerifAI from supporting engineering design into a cognitive component for Rebel Space’s vision of dynamic on-orbit communications routing via SpaceWeaver,” said Steven Massey, CEO & Co-Founder of Prewitt Ridge.

    The NASA Phase II contract has a two-year period of performance; during that time, Rebel Space and Prewitt Ridge will deliver prototype software to NASA that will advance its lunar communications capabilities.

    ABOUT REBEL SPACE TECHNOLOGIES 

    Founded in 2019 by Carrie and Gabriel Hernandez, a mother-and-son team, Rebel Space Technologies expands the boundaries of existing communications systems by developing autonomous network management software. Backed by Acequia Capital and world-class angel investors, Rebel Space has strong early engagement across leading commercial space, defense, and logistics companies.

    (https://www.rebelspacetech.com)

    Press Contact: Carrie Hernandez, CEO, carrie@rebelspacetech.com

    ABOUT PREWITT RIDGE

    Prewitt Ridge was founded by former SpaceX engineers in 2019 to solve one of the most difficult meta-problems in deep tech: making engineering collaboration less prone to error, and less reliant on email & PowerPoint. Backed by leading early-stage funds Wonder Ventures, Haystack, and Acequia Capital, Prewitt Ridge’s technologies have found early traction across a variety of private aerospace companies and government agencies.

    (https://www.prewittridge.com)

    Press Contact: Steven Massey, CEO, media@prewittridge.com

    Source: Rebel Space Technologies

    [ad_2]

    Source link

  • How your phone learned to see in the dark | CNN Business

    How your phone learned to see in the dark | CNN Business

    [ad_1]


    New York (CNN) — 

    Open up Instagram at any given moment and it probably won’t take long to find crisp pictures of the night sky, a skyline after dark or a dimly lit restaurant. While shots like these used to require advanced cameras, they’re now often possible from the phone you already carry around in your pocket.

    Tech companies such as Apple, Samsung and Google are investing resources to improve their night photography options at a time when camera features have increasingly become a key selling point for smartphones that otherwise largely all look and feel the same from one year to the next.

    Earlier this month, Google brought a faster version of its Night Sight mode, which uses AI algorithms to brighten images taken in dark environments, to more of its Pixel models. Apple’s Night mode, which is available on models as far back as the iPhone 11, was touted as a premier feature on its iPhone 14 lineup last year thanks to its improved camera system.

    These tools have come a long way in just the past few years, thanks to significant advancements in artificial intelligence technology as well as image processing that has become sharper, quicker, and more resilient to challenging photography situations. And smartphone makers aren’t done yet.

    “People increasingly rely on their smartphones to take photos, record videos, and create content,” said Lian Jye Su, an artificial intelligence analyst at ABI Research. “[This] will only fuel the smartphone companies to up their games in AI-enhanced image and video processing.”

    While there has been much focus lately on Silicon Valley’s renewed AI arms race over chatbots, the push to develop more sophisticated AI tools could also help further improve night photography and bring our smartphones closer to being able to see in the dark.

    Samsung’s Night mode feature, which is available on various Galaxy models but optimized for its premium S23 Ultra smartphone, promises to do what would have seemed unthinkable just five to 10 years ago: enable phones to take clearer pictures with little light.

    The feature is designed to minimize what’s called “noise,” the grainy distortion that typically results from poor lighting conditions, long exposure times, and other factors that can take away from the quality of an image.

    The secret to reducing noise, according to the company, is the combination of the S23 Ultra’s adaptive 200-megapixel sensor and the processing that follows. After the shutter button is pressed, Samsung uses advanced multi-frame processing to combine multiple images into a single picture and AI to automatically adjust the photo as necessary.

    “When a user takes a photo in low or dark lighting conditions, the processor helps remove noise through multi-frame processing,” said Joshua Cho, executive vice president of Samsung’s Visual Solution Team. “Instantaneously, the Galaxy S23 Ultra detects the detail that should be kept, and the noise that should be removed.”

    For Samsung and other tech companies, AI algorithms are crucial to delivering photos taken in the dark. “The AI training process is based on a large number of images tuned and annotated by experts, and AI learns the parameters to adjust for every photo taken in low-light situations,” Su explained.

    For example, the algorithms identify the right level of exposure, determine the correct color palette and gradient for the lighting conditions, artificially sharpen blurred faces or objects, and then make those changes. The final result, however, can look quite different from what the person taking the picture saw in real time, in what some might argue is a technical sleight of hand.
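
    As a rough illustration of the multi-frame idea (a generic sketch, not Samsung’s proprietary pipeline; the function and values below are assumptions for the example), averaging several aligned frames of the same scene suppresses random sensor noise roughly in proportion to the square root of the number of frames:

    ```python
    import numpy as np

    def merge_frames(frames):
        """Average a burst of pre-aligned frames (each H x W x 3, values in [0, 1])."""
        return np.stack(frames, axis=0).mean(axis=0)

    # Simulated example: a dim, flat scene plus independent per-frame sensor noise.
    rng = np.random.default_rng(0)
    scene = np.full((4, 4, 3), 0.05)                       # "true" low-light signal
    burst = [scene + rng.normal(0.0, 0.02, scene.shape) for _ in range(8)]

    merged = merge_frames(burst)
    print("single-frame noise:", np.std(burst[0] - scene))
    print("merged-frame noise:", np.std(merged - scene))   # roughly 1/sqrt(8) as large
    ```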

    Lights illuminate the Atlanta Botanical Gardens in this photo, taken using the Google Pixel 5’s Night Sight setting.

    Google is also focused on reducing noise in photography. Its AI-powered Night Sight feature captures a burst of longer-exposure frames, then uses something called HDR+ Bracketing, which creates several photos with different settings. After a picture is taken, the images are combined to create “sharper photos” even in dark environments “that are still incredibly bright and detailed,” said Alex Schiffhauer, a group product manager at Google.

    While effective, there can be a slight but noticeable delay before the image is ready. But Schiffhauer said Google intends to speed up this process more on future Pixel iterations. “We’d love a world in which customers can get the quality of Night Sight without needing to hold still for a few seconds,” Schiffhauer said.
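
    A generic sketch of the bracketing idea described above (this is standard exposure fusion, not Google’s actual HDR+ implementation; the function name and numbers are assumptions for illustration) blends differently exposed frames, weighting each pixel by how well exposed it is:

    ```python
    import numpy as np

    def fuse_bracketed(frames, exposure_times, sigma=0.2):
        """Blend bracketed frames (values in [0, 1]) into a rough radiance estimate."""
        radiance = np.zeros_like(frames[0], dtype=float)
        weights = np.zeros_like(frames[0], dtype=float)
        for img, t in zip(frames, exposure_times):
            w = np.exp(-((img - 0.5) ** 2) / (2 * sigma ** 2))  # favor mid-tone pixels
            radiance += w * (img / t)   # normalize by exposure time
            weights += w
        return radiance / np.maximum(weights, 1e-6)

    # Example: a short and a long exposure of the same small grayscale scene.
    short = np.array([[0.02, 0.10], [0.40, 0.90]])
    long_ = np.array([[0.08, 0.40], [0.95, 1.00]])
    print(fuse_bracketed([short, long_], exposure_times=[0.01, 0.04]))
    # Shadows lean on the long exposure, highlights on the short one; a real pipeline
    # would then tone-map this estimate back into a displayable photo.
    ```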

    Google also has an astrophotography feature, which allows people to take shots of the night sky without needing to tweak the exposure or other settings. The algorithms detect details in the sky and enhance them so they stand out, according to the company.

    Apple has long been rumored to be working on an astrophotography feature, but some iPhone 14 Pro Max users have been able to capture pictures of the night sky through its existing Night mode tool. When a device detects a low-light environment, Night mode turns on to capture details and brighten shots. (The company did not respond to a request to elaborate on how the algorithms work.)

    AI can make a difference in the image, but the end results for each of these features also depend on the phone’s lenses, said Gartner analyst Bill Ray. A traditional camera keeps the lens several centimeters from the sensor, but the limited space inside a phone often requires squeezing things together, which can result in a shallower depth of field and reduced image quality, especially in darker environments.

    “The quality of the lens is still a big deal, and how the phone addresses the lack of depth,” Ray said.

    While night photography on phones has come a long way, a buzzy new technology could push it ahead even more.

    Generative AI, the technology that powers the viral chatbot ChatGPT, has earned plenty of attention for its ability to create compelling essays and images in response to user prompts. But these AI systems, which are trained on vast troves of online data, also have potential to edit and process images.

    “In recent years, generative AI models have also been used in photo-editing functions like background removal or replacement,” Su said. If this technology is added to smartphone photo systems, it could eventually make night modes even more powerful, Su said.

    Big Tech companies, including Google, are already fully embracing this technology in other parts of their business. Meanwhile, smartphone chipset vendors like Qualcomm and MediaTek are looking to support more generative AI applications natively on consumer devices, Su said. These include image and video augmentation.

    “But this is still about two to three years away from limited versions of this showing up on smartphones,” he said.

    [ad_2]

    Source link

  • Microsoft leaps into the AI regulation debate, calling for a new US agency and executive order | CNN Business

    Microsoft leaps into the AI regulation debate, calling for a new US agency and executive order | CNN Business

    [ad_1]



    (CNN) — 

    Microsoft joined a sprawling global debate on the regulation of artificial intelligence Thursday, echoing calls for a new federal agency to control the technology’s development and urging the Biden administration to approve new restrictions on how the US government uses AI tools.

    In a speech in Washington attended by multiple members of Congress and civil society groups, Microsoft President Brad Smith described AI regulation as the challenge of the 21st century, outlining a five-point plan for how democratic nations could address the risks of AI while promoting a liberal vision for the technology that could rival competing efforts from countries such as China.

    The remarks highlight how one of the largest companies in the AI industry hopes to influence the fast-moving push by governments, particularly in Europe and the United States, to rein in AI before it causes major disruptions to society and the economy.

    In a roughly hour-long appearance that was equal parts product pitch and policy proposal, Smith compared AI to the printing press and described how it could streamline policymaking and lawmakers’ constituent outreach, before calling for “the rule of law” to govern AI at every part of its lifecycle and supply chain.

    Regulations should apply to everything from the data centers that train large language models to the end users such as banks, hospitals and others that may apply the technology toward making life-altering decisions, Smith said.

    For decades, “the rule of law and a commitment to democracy has kept technology in its proper place,” Smith said. “We’ve done it before; we can do it again.”

    In his remarks, Smith joined calls made last week by OpenAI — the company behind ChatGPT, in which Microsoft has invested billions — for the creation of a new government regulator that can oversee a licensing system for cutting-edge AI development, combined with testing and safety standards as well as government-mandated disclosure rules.

    Whether a new federal regulator is needed to police AI is quickly emerging as a focal point of the debate in Washington; opponents such as IBM have argued, including in an op-ed Thursday, that AI regulation should be baked into every existing federal agency because of their understanding of the sectors they oversee and how AI may be most likely to transform them.

    Smith also called for President Joe Biden to develop and sign an executive order requiring federal agencies that procure AI tools to implement a risk management framework developed and published this year by the National Institute of Standards and Technology. That framework, which Congress first ordered with legislation in 2020, covers ways that companies can use AI responsibly and ethically.

    Such an order would leverage the US government’s immense purchasing power to shape the AI industry and encourage the voluntary adoption of best practices, Smith said.

    Microsoft itself plans to implement the NIST framework “across all of our services,” Smith added, a commitment he described as the direct outgrowth of a recent White House meeting with AI CEOs in Washington. Smith also pledged to publish an annual AI transparency report.

    As part of Microsoft’s proposal, Smith said any new rules for AI should include revamped export controls tailor-made for the AI age to prevent the technology from being abused by sanctioned entities.

    And, he said, the government should mandate redundant AI circuit breakers that would allow algorithms to be shut off by critical infrastructure providers or from within the data centers they depend on.

    Smith’s remarks, and a related policy paper, come a week after Google released its own proposals calling for global cooperation and common standards for artificial intelligence.

    “AI is too important not to regulate, and too important not to regulate well,” Kent Walker, Google’s president of global affairs, said in a blog post unveiling the company’s plan.

    [ad_2]

    Source link

  • Thousands of authors demand payment from AI companies for use of copyrighted works | CNN Business

    Thousands of authors demand payment from AI companies for use of copyrighted works | CNN Business

    [ad_1]


    Washington (CNN) — 

    Thousands of published authors are requesting payment from tech companies for the use of their copyrighted works in training artificial intelligence tools, marking the latest intellectual property critique to target AI development.

    The list of more than 8,000 authors includes some of the world’s most celebrated writers, including Margaret Atwood, Dan Brown, Michael Chabon, Jonathan Franzen, James Patterson, Jodi Picoult and Philip Pullman, among others.

    In an open letter they signed, posted by the Authors Guild Tuesday, the writers accused AI companies of unfairly profiting from their work.

    “Millions of copyrighted books, articles, essays, and poetry provide the ‘food’ for AI systems, endless meals for which there has been no bill,” the letter said. “You’re spending billions of dollars to develop AI technology. It is only fair that you compensate us for using our writings, without which AI would be banal and extremely limited.”

    Tuesday’s letter was addressed to the CEOs of ChatGPT-maker OpenAI, Facebook-parent Meta, Google, Stability AI, IBM and Microsoft. Most of the companies didn’t immediately respond to a request for comment. Meta, Microsoft and Stability AI declined to comment.

    Much of the tech industry is now working to develop AI tools that can generate compelling images and written work in response to user prompts. These tools are built on large language models, which are trained on vast troves of information online. But recently, there has been growing pressure on tech companies over alleged intellectual property violations with this training process.

    This month, comedian Sarah Silverman and two authors filed a copyright lawsuit against OpenAI and Meta, while a proposed class-action suit accused Google of “stealing everything ever created and shared on the internet by hundreds of millions of Americans,” including copyrighted content. Google has called the lawsuit “baseless,” saying it has been upfront for years that it uses public data to train its algorithms. OpenAI did not respond to an earlier request for comment on the suit.

    In addition to demanding compensation “for the past and ongoing use of our works in your generative AI programs,” the thousands of authors who signed the letter this week called on AI companies to seek permission before using the copyrighted material. They also urged the companies to pay writers when their work is featured in the results of generative AI, “whether or not the outputs are infringing under current law.”

    The letter also cites this year’s Supreme Court holding in Warhol v Goldsmith, which found that the late artist Andy Warhol infringed on a photographer’s copyright when he created a series of silk screens based on a photograph of the late singer Prince. The court ruled that Warhol did not sufficiently “transform” the underlying photograph so as to avoid copyright infringement.

    “The high commerciality of your use argues against fair use,” the authors wrote to the AI companies.

    In May, OpenAI CEO Sam Altman appeared to acknowledge more needs to be done to address concerns from creators about how AI systems use their works.

    “We’re trying to work on new models where if an AI system is using your content, or if it’s using your style, you get paid for that,” he said at an event.

    – CNN’s Catherine Thorbecke contributed to this report.

    [ad_2]

    Source link

  • The viral new ‘Drake’ and ‘Weeknd’ song is not what it seems | CNN Business

    The viral new ‘Drake’ and ‘Weeknd’ song is not what it seems | CNN Business

    [ad_1]



    (CNN) — 

    One of the buzziest songs recently circulating on TikTok and climbing the Spotify charts featured the familiar voices of best-selling artists Drake and the Weeknd. But there’s a twist: Drake and the Weeknd appear to have had nothing to do with it.

    The viral track, “Heart on my Sleeve,” comes from an anonymous TikTok user named Ghostwriter977, who claims to have used artificial intelligence to generate the voices of Drake and the Weeknd for the track.

    “I was a ghostwriter for years and got paid close to nothing just for major labels to profit,” Ghostwriter977 wrote in the video comments. “The future is here.”

    “Heart on my Sleeve” racked up more than 11 million views across several videos in just a few days and was streamed on Spotify hundreds of thousands of times. The original TikTok video has seemingly been taken down, and the song has since been removed from streaming services including YouTube, Apple Music and Spotify. (TikTok, YouTube, Apple and Spotify did not respond to a request for comment.)

    The exact origin of the song remains unclear, and some have suggested it could be a publicity stunt. But the stunning traction for “Heart on my Sleeve” may only add to the anxiety inside the music industry as it goes on offense against the possible threat posed by a new crop of increasingly powerful AI tools on the market.

    Universal Music Group, the music label that represents Drake, The Weeknd and numerous other superstars, sent urgent letters in April to streaming platforms, including Spotify and Apple Music, asking them to block AI platforms from training on the melodies and lyrics of their copyrighted songs.

    “The training of generative AI using our artists’ music — which represents both a breach of our agreements and a violation of copyright law as well as the availability of infringing content created with generative AI on digital service providers — begs the question as to which side of history all stakeholders in the music ecosystem want to be on: the side of artists, fans and human creative expression, or on the side of deep fakes, fraud and denying artists their due compensation,” the company said in a statement this week to CNN.

    The record label said platforms have “a fundamental legal and ethical responsibility to prevent the use of their services in ways that harm artists.”

    But attempting to crack down on AI-generated music may pose a unique challenge. The legal landscape for AI work remains unclear, the tools to create it are widely accessible and social media makes it easier than ever to distribute it.

    AI-generated music is not new. Taryn Southern’s debut song “Break Free,” which was composed and produced with AI, hit the Top 100 radio charts back in 2018, and VAVA, an AI music artist (i.e. not a human), currently has a single out in Thailand.

    But a new crop of AI tools has made it easier than ever to quickly generate convincing images, audio, video and written work. Some services, such as Boomy, specifically leverage generative AI to make music creation more accessible.

    There’s little known about who is behind the Ghostwriter977 account, or which tools the creator used to make the track. The user did not respond to a CNN request for comment.

    In the bio section of the user’s TikTok account, a link directs users to a page on Laylo, a website where fans can sign up for notifications from artists when new songs drop or merchandise and tickets become available. The company told CNN the account was likely registered to build up a fan base and brought in “tens of thousands” of signups in the past few days.

    Laylo CEO Alec Ellin denied that the company was behind the viral track as some have speculated, but Ellin told CNN whoever did make it was “clearly a really savvy creator” and called it “a perfect example of the power of using Laylo to own your audience.”

    Michael Inouye, an analyst at ABI Research, said “Heart on my Sleeve” could have been made in several ways depending on the sophistication of the AI and level of musical talent.

    “If music artists were involved, they could create the background music and the lyrics, and then the AI model could be trained with content from Drake and The Weeknd to replicate their voices and singing styles,” he said. “AI could also have generated most of the song, lyrics and replicated the artists again based on the training data set and any prompts given to direct the AI model.”

    He added that part of the song’s fascination and virality comes from “just how good AI has gotten at creating content, which includes replicating famous people.”

    Roberto Nickson, who is building an AI platform to help boost productivity and workflow, recently posted a video on Twitter showing how easy it is to record a verse and train an AI model to replace his vocals. He used the artist formerly known as Kanye West as an example.

    “The results will blow your mind,” he said. “You’re going to be listening to songs by your favorite artist that are completely indistinguishable and you’re not going to know if it’s them or not.”

    Although the entertainment industry has seen these issues coming, regulations are lagging behind the rapid pace of AI development.

    Audrey Benoualid, an entertainment lawyer based in Los Angeles, said one could argue “Heart On My Sleeve” does not infringe copyright as it appears to be an “original” composition.

    “Ghostwriter also publicized that Drake and The Weeknd were not involved in the making of the song, which could protect them from a ‘passing off’ claim, where profits are generated as consumers are misled into believing the song is actually a Drake-Weeknd collaboration,” she said in an email to CNN.

    However, Benoualid added, machine learning and generative AI programs may also be found to infringe copyright in existing works, either by making copies of those works to train the AI or by generating outputs that are substantially similar to those existing works. “Major labels would undoubtedly, and have already begun to, argue that their copyrights (and their artists’ intellectual property rights) are being infringed,” she said.

    Michael Nash, an executive VP at Universal Music Group, recently wrote in an op-ed that AI music is “diluting the market, making original creations harder to find, and violating artists’ legal rights to compensation from their work.”

    No regulations currently dictate what data AI can and cannot train on. But last month, in response to individuals seeking copyright for AI-generated works, the US Copyright Office released new guidance on how to register literary, musical, and artistic works made with AI.

    The copyright will be determined on a case-by-case basis, the guidance continued, based on how the AI tool operates and how it was used to create the final piece or work. The US Copyright Office announced it will also seek public input on how the law should apply to copyrighted works that AI trains on, and how the office should treat those works.

    “AI and copyright law and the rights of musicians and labels have crashed into one another (once again), and it will take time for the dust to settle,” Benoualid said. “The landscape is anything but clear at the moment.”

    Inouye said that if AI-generated content becomes associated with famous individuals in a negative way, that could be grounds for a lawsuit not only to take the content down but also to force the creators to cease and desist their operations and potentially to seek damages.

    “On the flip side, if the content were to be popular and the creator were to make revenue off of the artists’ image or likeness then again the artists could similarly request the content to be taken down and potentially sue for any monetary gains,” he said.

    But for now, concerned parties may be forced to play whack-a-mole. While services like Spotify pulled “Heart on my Sleeve,” versions of it appeared to continue circulating as of Tuesday on other online platforms.

    Even a song made with artificial intelligence may find real staying power online.

    – CNN’s Vanessa Yurkevich contributed to this report.

    [ad_2]

    Source link

  • White House unveils an AI plan ahead of meeting with tech CEOs | CNN Business

    White House unveils an AI plan ahead of meeting with tech CEOs | CNN Business

    [ad_1]



    (CNN) — 

    The White House on Thursday announced a series of measures to address the challenges of artificial intelligence, driven by the sudden popularity of tools such as ChatGPT and amid rising concerns about the technology’s potential risks for discrimination, misinformation and privacy.

    The US government plans to introduce policies that shape how federal agencies procure and use AI systems, the White House said. The step could significantly influence the market for AI products and control how Americans interact with AI on government websites, at security checkpoints and in other settings.

    The National Science Foundation will also spend $140 million to promote research and development in AI, the White House added. The funds will be used to create research centers that seek to apply AI to issues such as climate change, agriculture and public health, according to the administration.

    The plan comes the same day that Vice President Kamala Harris and other administration officials are expected to meet with the CEOs of Google, Microsoft, ChatGPT-creator OpenAI and Anthropic to emphasize the importance of ethical and responsible AI development. And it coincides with a UK government inquiry launched Thursday into the risks and benefits of AI.

    “Tech companies have a fundamental responsibility to make sure their products are safe and secure, and that they protect people’s rights before they’re deployed or made public,” a senior Biden administration official told reporters on a conference call.

    Officials cited a range of risks the public faces in the widespread adoption of AI tools, including the possible use of AI-created deepfakes and misinformation that could undermine the democratic process. Job losses linked to rising automation, biased algorithmic decision-making, physical dangers arising from autonomous vehicles and the threat of AI-powered malicious hackers are also on the White House’s list of concerns.

    It’s just the latest example of the federal government acknowledging concerns from the rapid development and deployment of new AI tools, and trying to find ways to address some of the risks.

    Testifying before Congress, members of the Federal Trade Commission have argued AI could “turbocharge” fraud and scams. Its chair, Lina Khan, wrote in a New York Times op-ed this week that the US government has ample existing legal authority to regulate AI by leaning on its mandate to protect consumers and competition.

    Last year, the Biden administration unveiled a proposal for an AI Bill of Rights calling for developers to respect the principles of privacy, safety and equal rights as they create new AI tools.

    Earlier this year, the Commerce Department released voluntary risk management guidelines for AI that it said could help organizations and businesses “govern, map, measure and manage” the potential dangers in each part of the development cycle. In April, the Department also said it is seeking public input on the best policies for regulating AI, including through audits and industry self-regulation.

    The US government isn’t alone in seeking to shape AI development. European officials anticipate hammering out AI legislation as soon as this year that could have major implications for AI companies around the world.

    [ad_2]

    Source link

  • Chinese police detain man for allegedly using ChatGPT to spread rumors online | CNN Business

    Chinese police detain man for allegedly using ChatGPT to spread rumors online | CNN Business

    [ad_1]


    Hong Kong (CNN) — 

    Police in China have detained a man they say used ChatGPT to create fake news and spread it online, in what state media has called the country’s first criminal case related to the AI chatbot.

    According to a statement from police in the northwest province of Gansu, the suspect allegedly used ChatGPT to generate a bogus report about a train crash, which he then posted online for profit. The article received about 15,000 views, the police said in Sunday’s statement.

    ChatGPT, developed by Microsoft (MSFT)-backed OpenAI, is banned in China, though internet users can use virtual private networks (VPN) to access it.

    Train crashes have been a sensitive issue in China since 2011, when authorities faced pressure to explain why state media had failed to provide timely updates on a bullet train collision in the city of Wenzhou that resulted in 40 deaths.

    Gansu authorities said the suspect, surnamed Hong, was questioned in the city of Dongguan in southern Guangdong province on May 5.

    “Hong used modern technology to fabricate false information, spreading it on the internet, which was widely disseminated,” the Gansu police said in the statement.

    “His behavior amounted to picking quarrels and provoking trouble,” they added, explaining the offense that Hong was accused of committing.

    Police said the arrest was the first in Gansu since China’s Cyberspace Administration enacted new regulations in January to rein in the use of deep fakes. State broadcaster CGTN says it was the country’s first arrest of a person accused of using ChatGPT to fabricate and spread fake news.

    Formally known as deep synthesis, deep fake refers to highly realistic textual and visual content generated by artificial intelligence.

    The new legislation bars users from generating deep fake content on topics already prohibited by existing laws on China’s heavily censored internet. It also outlines takedown procedures for content considered false or harmful.

    The arrest also came amid a 100-day campaign launched by the internet branch of the Ministry of Public Security in March to crack down on the spread of internet rumors.

    Since the beginning of the year, Chinese internet giants such as Baidu (BIDU) and Alibaba (BABA) have sought to catch up with OpenAI, launching their own versions of the ChatGPT service.

    Baidu unveiled “Wenxin Yiyan,” or “ERNIE Bot,” in March. Two months later, Alibaba launched “Tongyi Qianwen,” which roughly translates as “seeking truth by asking a thousand questions.”

    In draft guidelines issued last month to solicit public feedback, China’s cyberspace regulator said generative AI services would be required to undergo security reviews before they can operate.

    Service providers will also be required to verify users’ real identities and to provide details about the scale and type of data they use, their basic algorithms and other technical information.

    [ad_2]

    Source link