ReportWire

Tag: Robotics

  • Completion of a System of Robots that Use Teamwork to Pick Fruit and Transport Them All on Their Own!

    Newswise — A system of robots that harvest and transport crops on their own without human assistance has been developed for use in agricultural facilities such as smart farms.

    The research team led by Choi Tae-yong, principal researcher in the Department of Robotics and Mechatronics of the AI Robot Research Division at the Korea Institute of Machinery and Materials (President Park Sang-jin, hereinafter referred to as KIMM), an institution under the jurisdiction of the Ministry of Science and ICT, has developed a multiple-robot system for harvesting crops. The technology can relieve agricultural sites facing a marked shortage of manpower by harvesting crops through an automated system. The system also includes robots that use autonomous driving technology to transport the harvested crops to loading docks.

    KIMM’s new multiple-robot system for harvesting horticultural crops consists of harvesting robots and transfer robots. The technology is expected to ease difficulties at agricultural sites, where severe labor shortages in recent years have left crops unharvested after they were grown. By fully automating harvesting and transport across an entire farming facility, it demonstrates that not only harvesting but also various other labor-intensive tasks at agricultural sites can be performed without human workers.

    Because the agricultural environment is complex and highly variable, applying robot technologies to it demands an advanced level of engineering. As a result, research on harvesting robots for facility farming has rarely progressed beyond early stages, and previous crop-harvesting robots were limited to single-robot harvesting functions.

    KIMM’s newly developed multiple-robot system is not only capable of harvesting; it also establishes multi-robot harvesting and transportation technologies that automate crop harvesting across the entire farming facility. It consists of harvesting robots that pick the crops and transfer robots that carry the harvested crops back to the loading area. There is no limit on the number of robot units, so multiple harvesting robots can pick crops while multiple transfer robots carry them away at the same time.

    The harvesting robots recognize crop information rapidly and precisely in facility farm settings by applying KIMM’s cutting-edge mechanical and AI technologies. These robots use robotic arms and high-powered robotic hands developed by KIMM to harvest tough crops without difficulty. The transfer robots are also capable of precise autonomous driving in facility farm settings.

    The harvesting robots apply AI technology to recognize the location and shape of crops accurately, and the crops are then harvested using robotic hands that are specifically designed for harvesting. Each harvesting robot is equipped with a box in which it temporarily stores the harvested crops. Once the box is filled to a certain point, a transfer robot is called and the crops are handed over for transport. With a crop recognition rate of over 90% and 24-hour operation, the KIMM research team achieved harvesting at roughly 80% of the efficiency of a human worker.
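
    The fill-and-call coordination described above lends itself to a simple dispatch loop. Below is a minimal, purely illustrative Python sketch; the class names, the 80% fill threshold, and the shared request queue are assumptions for illustration, not KIMM's implementation.

```python
# Minimal, illustrative sketch of the fill-and-call coordination described above.
# All names, thresholds, and interfaces are assumptions, not KIMM's implementation.
from dataclasses import dataclass
from queue import Queue

BOX_FILL_THRESHOLD = 0.8  # call a transfer robot once the box is ~80% full (assumed)

@dataclass
class HarvestRobot:
    robot_id: str
    requests: Queue                 # queue shared by all harvesters and the dispatcher
    box_fill: float = 0.0           # fraction of the onboard box currently filled
    requested: bool = False         # whether a transfer robot has already been called

    def harvest_one(self, crop_volume: float) -> None:
        """Pick one recognized crop and place it in the onboard box."""
        self.box_fill += crop_volume
        if self.box_fill >= BOX_FILL_THRESHOLD and not self.requested:
            self.requested = True
            self.requests.put(self.robot_id)   # ask for a transfer robot

    def hand_over(self) -> None:
        """Empty the box onto a transfer robot that has arrived."""
        self.box_fill = 0.0
        self.requested = False

def dispatch(requests: Queue, idle_transfer_robots: list[str]) -> None:
    """Send idle transfer robots to waiting harvesters, first come, first served."""
    while not requests.empty() and idle_transfer_robots:
        harvester = requests.get()
        carrier = idle_transfer_robots.pop(0)
        print(f"{carrier}: drive to {harvester}, load crops, return to loading dock")

shared = Queue()
bots = [HarvestRobot(f"harvester-{i}", shared) for i in range(3)]
for bot in bots:
    for _ in range(5):
        bot.harvest_one(crop_volume=0.2)
dispatch(shared, ["transfer-1", "transfer-2"])
```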

    KIMM principal researcher Choi Tae-yong stated that the newly developed multiple-robot system for harvesting crops marks the beginning of research aimed at solving labor shortages in rural areas, where the farming workforce is gradually disappearing. He added that, moving forward, the KIMM team will continue to conduct research on performance and functional enhancements that can be applied not only to indoor farming facilities, but also to various manual tasks in outdoor environments, such as orchards.

    This research study was conducted as part of the “Advanced Agricultural Machinery Industrialization Technology Development Project”, operated by the Korea Institute of Planning and Evaluation for Technology in Food, Agriculture and Forestry (IPET), under the jurisdiction of the Ministry of Agriculture, Food and Rural Affairs. Participants in the study included Hada Co., Ltd., the National Academy of Agricultural Sciences, Chungbuk National University, and Chungnam National University.

     

     

    ###

     

    The Korea Institute of Machinery and Materials (KIMM) is a non-profit government-funded research institute under the Ministry of Science and ICT. Since its foundation in 1976, KIMM has contributed to the nation’s economic growth by performing R&D on key technologies in machinery and materials, conducting reliability testing and evaluation, and commercializing the products and technologies it develops.

     

    National Research Council of Science and Technology


  • KIST offers a novel paradigm for social robots

    Newswise — The research team led by Dr. Sona Kwak of the Korea Institute of Science and Technology (KIST; President Seok Jin Yoon) presented “CollaBot” at the Robot Design Competition hosted by the International Conference on Social Robotics (ICSR) 2022, held at the Chamber of Commerce in Florence, Italy (December 13-16, 2022). After competing in the finals against University College London, which presented “Bubble Worlds,” the team received the top award in the “hardware, design, and interface” category.

    Previous studies on social robots were primarily based on humanoid robots that understand the context of a situation and provide a range of situation-specific services. However, commercialization of humanoid robots, which were expected to perform tasks at or even above the level of an actual human, stalled because the robots did not function as well as expected. Single-function robotic products, meanwhile, focus solely on one specific task and are therefore limited in providing a wide range of assistance adapted to a consumer’s environment and situation.

    To address these limitations, the research team led by Dr. Kwak (KIST) developed a robotic library system (CollaBot) that understands situational context by integrating data collected by various robotic products and offers context-customized assistance. The system, comprising tables, chairs, bookshelves, and lights, provides human-robot interaction based on collaboration between the different robotic products.

    The system works as follows: the user’s smartphone, the door, a robotic bookshelf, and a robotic chair are all connected, so the user can search for and select a book of interest on their smartphone, and the selected book is automatically brought out from the bookshelf. The chair can function as a ladder, moving close to the user so they can step on it, or as a cart, transporting several books at once. In other words, in addition to executing its original function, each component adapts its role to the situation to offer user-friendly assistance.
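
    As a rough illustration of this kind of device coordination, the sketch below wires a hypothetical bookshelf and chair to a “book selected” event. The component names and the rules for when the chair becomes a ladder or a cart are invented for illustration and are not KIST's CollaBot code.

```python
# Hypothetical sketch of the kind of device coordination described above;
# the component names and rules are invented and are not KIST's CollaBot code.
class Bookshelf:
    def present(self, book: str) -> int:
        row = hash(book) % 5                  # stand-in for a real catalogue lookup
        print(f"Bookshelf: pushing '{book}' out from row {row}")
        return row

class Chair:
    def act_as_ladder(self) -> None:
        print("Chair: moving to the shelf so the user can step up")

    def act_as_cart(self, books: list[str]) -> None:
        print(f"Chair: carrying {len(books)} books to the user's table")

def on_books_selected(books: list[str], shelf: Bookshelf, chair: Chair) -> None:
    """Triggered when the user selects books on their connected smartphone."""
    rows = [shelf.present(b) for b in books]
    if any(row >= 3 for row in rows):         # a book sits on a high shelf row
        chair.act_as_ladder()
    if len(books) > 1:                        # too many books to carry by hand
        chair.act_as_cart(books)

on_books_selected(["Robotics Handbook", "HRI Basics"], Bookshelf(), Chair())
```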

    Dr. Dahyun Kang of KIST, who designed CollaBot’s interaction, said, “The proposed robotic system, based on collaboration between various robotic products, provides physical assistance by applying robotics technology to the existing Internet of Things to create a hyper-connected society. We expect that this type of system, which offers practical assistance in our daily lives, can pioneer a novel robotics market.”

    This year’s Robot Design Competition at the 13th ICSR was led by the award chair, Amit Kumar Pandey, who participated in the development of key social robots such as Sophia, Nao, and Pepper.

     

    ###

    This research was conducted via the KIST Institutional Program and KIST Technology Support Center Program. KIST was established in 1966 as the first government-funded research institute in Korea. KIST now strives to solve national and social challenges and secure growth engines through leading and innovative research. For more information, please visit KIST’s website at https://eng.kist.re.kr/

    National Research Council of Science and Technology


  • Newswise Live Event for March 15: What can we expect from AI and Chatbots in the next few years?

    What: What can we expect from AI and Chatbots in the next few years? A Newswise Live Event

    When: Wednesday, March 15, 2023, 1 PM to 2 PM EDT

    Who: Expert Panelists include:

    • Sercan Ozcan, Reader (Associate Professor) in Innovation & Technology Management at the University of Portsmouth
    • Jim Samuel, Associate Professor of Practice and Executive Director, Master of Public Informatics at the Bloustein School, Rutgers-New Brunswick
    • Alan Dennis, Professor of Information Systems and the John T. Chambers Chair of Internet Systems in the Kelley School of Business at IU Bloomington

    Details: Artificial intelligence news has escalated considerably in the last few months with the roll-out of Microsoft’s Bing Chatbot and the popularity of large language models (LLMs) such as ChatGPT. Popular social media app Snapchat has launched its chatbot called “My AI,” using the latest version of ChatGPT. Newswise Live is hosting a live expert panel on what to expect from AI in the near future, its impact on journalism, and the corporate race for AI dominance (Google vs. Microsoft, etc.). Panelists will discuss what we can expect from AI and Chatbots in the next three years.

    MEDIA REGISTER HERE

    Attention Journalists and Editors:

    A video and transcript of the event will be sent to those who register shortly after the event. Even if you can’t make this live virtual event, we encourage you to register to get a copy of these materials.

     

    Newswise


  • ‘Swarmalators’ better envision synchronized microbots

    Newswise — ITHACA, N.Y. — Imagine a world with precision medicine, where a swarm of microrobots delivers a payload of medicine directly to ailing cells. Or one where aerial or marine drones can collectively survey an area while exchanging minimal information about their location.

    One early step towards realizing such technologies is being able to simultaneously simulate swarming behaviors and synchronized timing – behaviors found in slime molds, sperm and fireflies, for example.

    In 2017, Cornell researchers first introduced a simple model of swarmalators – short for ‘swarming oscillators’ – in which particles self-organize to synchronize in both time and space. In the study, “Diverse Behaviors in Non-uniform Chiral and Non-chiral Swarmalators,” published Feb. 20 in the journal Nature Communications, they expanded this model to make it more useful for engineering microrobots, to better understand existing, observed biological behaviors, and to give theoreticians a richer model to experiment with in this field.

    “We wanted a simple mathematical model that can lay the foundation for swarmalators in general, something that captures all of the complex emergent phenomena we see in natural and engineered swarms,” said Kirstin Petersen, the paper’s senior author, assistant professor and an Aref and Manon Lahham Faculty Fellow in the Department of Electrical and Computer Engineering in Cornell Engineering.

    Steven Ceron, Ph.D. ‘22, a former graduate student in Petersen’s lab, is the paper’s first author, and Kevin O’Keeffe, Ph.D. ‘17, a former graduate student in applied mathematics, is a co-author.

    O’Keeffe compared this model to the largest doll in a set of Russian dolls, with each smaller doll representing models capable of simulating more refined behaviors. “We’ve tried to come up with a model that is as simple as possible in the hope of capturing generic phenomena,” he said.

    The researchers simplified their model to work with just four mathematical constants linked together to produce diverse emergent behaviors, such as aggregation, dispersion, vortices, traveling waves, and bouncing clusters.

    The new model can mimic particles in nature that each operate at different natural frequencies, as some objects move slower or faster around a trajectory than others. The researchers also added chirality, or the ability of a particle to move in a circle, because many examples in nature, such as sperm, swim in circles and in vortices. And particles in the model exhibit local coupling, so they sense and respond only to their local neighbors.
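
    For readers who want to experiment, the sketch below numerically integrates a widely cited form of the original swarmalator equations: phase-dependent spatial attraction, hard-core repulsion, and distance-weighted phase coupling with a natural-frequency term. It is a toy based on the earlier model rather than the expanded one in the new paper; the constants J and K, the Euler time step, and the identical frequencies are assumptions.

```python
# Toy integration of the original swarmalator equations (phase-dependent spatial
# attraction, hard-core repulsion, distance-weighted phase coupling). J, K, dt,
# and identical natural frequencies are assumptions for illustration only.
import numpy as np

def swarmalator_step(x, theta, omega, J=1.0, K=0.5, dt=0.05):
    """One Euler step: x is (N, 2) positions, theta/omega are (N,) phases/frequencies."""
    N = len(theta)
    dx = x[None, :, :] - x[:, None, :]                 # pairwise displacements x_j - x_i
    dist = np.linalg.norm(dx, axis=-1) + np.eye(N)     # +I avoids divide-by-zero when i == j
    dphase = theta[None, :] - theta[:, None]           # pairwise phase differences

    attract = (1.0 + J * np.cos(dphase))[..., None] * dx / dist[..., None]
    repel = dx / (dist ** 2)[..., None]
    velocity = (attract - repel).sum(axis=1) / N       # spatial dynamics

    dtheta = omega + (K / N) * (np.sin(dphase) / dist).sum(axis=1)  # phase dynamics
    return x + dt * velocity, theta + dt * dtheta

rng = np.random.default_rng(0)
N = 100
x = rng.uniform(-1.0, 1.0, size=(N, 2))
theta = rng.uniform(0.0, 2.0 * np.pi, size=N)
omega = np.zeros(N)                                    # identical oscillators for simplicity
for _ in range(500):
    x, theta = swarmalator_step(x, theta, omega)
print("final spread:", np.ptp(x, axis=0), "phase coherence:", abs(np.exp(1j * theta).mean()))
```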

    At its core, the model combines swarming behaviors with synchronization in time. Examples of swarming from nature include flocking birds or herds of stampeding buffalo, where individuals move together as a group. Synchronized timing can be found in cardiac pacemaker cells that fire an electric impulse in unison, shocking the heart into regular repeated beats. Sperm represent both phenomena together, as they can beat their tails in unison while swimming as a group. Fireflies are also known to fly in swarms while flashing in synchrony.

    “That’s what makes them swarmalators, because there’s two of the self-organizing forces going on at the same time,” O’Keeffe said.

    The model doesn’t try to model a specific real-world swarmalator, such as sperm, robotic drones, or magnetic domain walls. Rather, it tries to model the behavior common to all those systems – it aims for generality, rather than specificity. As an example, the model was shown to reproduce behaviors found in microbial slime molds, which can operate as individual cells, but when starved will aggregate into a slug and eventually a fruiting body.

    “These very simple coupled mechanisms can potentially be implemented on swarms of tiny robots with very limited power, computational and memory resources, which in spite of their individual simplicity can work together to produce the complex swarming behaviors we predict in our model,” Petersen said. 

    One future application could be for precision medicine, where tiny magnetized insoluble particles could swarm and be synchronized in relation to each other and then controlled to deliver a payload to tissues in need of a therapy, O’Keeffe said.

    The study was funded by the National Science Foundation, the Packard Foundation and an Aref and Manon Lahham Faculty Fellowship.

    -30-

    Cornell University


  • Save $93 on This Mini AI Robotics Arm and Software

    Disclosure: Our goal is to feature products and services that we think you’ll find interesting and useful. If you purchase them, Entrepreneur may get a small share of the revenue from the sale from our commerce partners.

    With the AI robotics market poised to generate more than $28 billion by 2028, now is the time to dive in. Everyone has to start somewhere, and for future AI robotics engineers, the Mirobot 6-Axis Mini Robot Arm Professional Kit may be the place to start.

    Designed for AI robotics learners, the Mirobot arm is a teaching tool that functions like an industrial robot pendant. Connectable via Bluetooth, the Mirobot arm has six-axis freedom and 0.2mm repeated positioning accuracy. Smooth, omnidirectional movement allows you to maneuver through three-dimensional setups with ease.

    Use the WLKATA Studio software to get familiar with inputting instructions with G-code, Teach & Play, Blockly, and Python. And with various end tools included, you can easily swap from drawing with the pen holder to picking objects up delicately via the claw attachment. So not only can you work from the programming end, but you’ll also have an opportunity to build some muscle memory by manipulating the arm.
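
    As a taste of what programming such an arm with G-code can look like, the sketch below streams a few motion commands over a serial link using Python and pyserial. The port name and the specific commands are placeholders, not the Mirobot’s documented command set; consult the WLKATA documentation before sending anything to real hardware.

```python
# Hedged sketch of streaming G-code to a desktop arm over a serial link.
# The port name and the exact commands are assumptions; the Mirobot's real
# command set is documented by WLKATA and should be used instead.
import time
import serial  # pyserial

COMMANDS = [
    "G28",                    # homing, as used in many G-code dialects (assumed here)
    "G0 X100 Y0 Z50",         # rapid move to a pose, coordinates in millimetres
    "G1 X120 Y20 Z30 F500",   # linear move at a set feed rate
]

with serial.Serial("COM3", 115200, timeout=1) as arm:   # port name is an assumption
    time.sleep(2)                                        # give the controller time to boot
    for line in COMMANDS:
        arm.write((line + "\n").encode())
        reply = arm.readline().decode(errors="ignore").strip()
        print(f"sent: {line!r}  reply: {reply!r}")
```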

    Users can choose their preferred control method: the included remote controller, the mobile app, or commands executed from a PC. Once comfortable with the operation, use accessories and expansions to set up elaborate AI scenes. The Mirobot arm lets you program, troubleshoot, and explore in a tabletop environment.

    If you get stuck, turn to the Education Resources of tutorials, source code, DIY guidance, and models, all included for free on WLKATA and the GitHub community. These instructional resources are an excellent place for learners to start and may even inspire younger users to pursue one of the most popular college majors in engineering.

    Breaking into AI robotics is possible. Start learning when you grab the WLKATA Mirobot 6-Axis Mini Robot Arm Professional Kit for 5% off, making it $1,757 (reg. $1,850).

    Prices subject to change.

    Entrepreneur Store


  • AI and health care: DePaul and Rosalind Franklin award interdisciplinary research grants

    Newswise — CHICAGO — DePaul University and Rosalind Franklin University of Medicine and Science are funding three faculty research projects that bring together artificial intelligence, biomedical discovery and health care. The competitive grants kickstart research among interdisciplinary teams, which include biologists, computer scientists, a geographer and a physicist.

    The first project will combine wearable, robotic sensors with GPS mapping to predict and prevent falls and injury among patients and members of the military. Another will analyze neurons in the brainstem to discover boundaries that control speech and swallowing. The third project uses machine learning and video tracking to develop early detection for illnesses like Parkinson’s disease.

    “We are thrilled with the scope and vision of these collaborative research projects from DePaul and Rosalind Franklin faculty members,” said Salma Ghanem, provost of DePaul University. “Together, we have the potential to see artificial intelligence fuel major advances for human health in our lifetime.”

    “This AI initiative and the outstanding funded first-round pilot projects represent the next step in the ongoing research collaboration between our two universities, which to date has yielded substantive outcomes,” said Ronald Kaplan, executive vice president for research at Rosalind Franklin University. “We believe this cutting-edge work has significant potential to improve health within our society.”

    Wearable sensors, GPS combine to prevent injury
    “We can tell a lot about a person’s health from how they walk,” said Sungsoon (Julie) Hwang, professor of geography at DePaul. She is teaming up with robotics expert Muhammad Umer Huzaifa and data scientist Ilyas Ustun. Their research will combine wearable technology and GPS to track a person’s gait.

    In his robotics and AI lab, Huzaifa deploys Inertial Measurement Units (IMU) to track whether a person is walking, sitting or even falling. These sensors, which measure a body’s movement by detecting the direction of gravity and rotational speeds, may be worn as part of an exoskeleton. “Predicting harmful walking patterns and preventing falls has implications for people in a health care setting and members of the military deployed in the field,” Huzaifa explained.
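
    As background on how an IMU’s two readings are typically combined, the sketch below shows a standard complementary filter: the gyroscope’s rotational speed is integrated for short-term accuracy and blended with the tilt implied by the accelerometer’s gravity direction. It is a generic textbook example with made-up readings, not the researchers’ pipeline.

```python
# Generic complementary-filter example of how an IMU's two readings are fused:
# the gyroscope's rotational speed (short-term) and the accelerometer's gravity
# direction (long-term). Illustrative only; not the researchers' pipeline.
import math

def update_pitch(pitch, gyro_rate, accel_x, accel_z, dt, alpha=0.98):
    """Blend integrated gyro rate with the tilt implied by the gravity vector."""
    gyro_pitch = pitch + gyro_rate * dt          # integrate angular velocity (rad/s)
    accel_pitch = math.atan2(accel_x, accel_z)   # tilt from the measured gravity direction
    return alpha * gyro_pitch + (1.0 - alpha) * accel_pitch

pitch = 0.0
samples = [(0.10, 0.05, 0.99), (0.20, 0.10, 0.98), (0.00, 0.12, 0.97)]  # made-up readings
for gyro_rate, ax, az in samples:
    pitch = update_pitch(pitch, gyro_rate, ax, az, dt=0.01)
    print(f"estimated pitch: {pitch:.4f} rad")
```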

    DePaul faculty will work with Chris Connaboy, director of the Center for Lower Extremity Ambulatory Research at Rosalind Franklin, to use data from his lab. Ustun will use machine learning to integrate the GPS and IMU data, potentially predicting where injuries and falls could occur.

    “Our movements create patterns, and we want to identify distinct patterns using machine learning to help assess an individual’s current health, especially those who are at risk,” Ustun said.

    Machine learning discovery in the brainstem
    The brainstem is responsible for breathing and swallowing, which can have implications for speech disorders, apnea and Sudden Infant Death Syndrome. “Within the brainstem, neurons are not clearly differentiated,” said Jacob Furst, professor of computing at DePaul. “Our project will look for genetic signatures that may differentiate the cells when there is no obvious physical difference.”

    “There is so much data being generated in the life sciences that it can be difficult to look for patterns to discover key biological insights,” said Thiru Ramaraj, an assistant professor of bioinformatics at DePaul. Drawing from an atlas of existing high-resolution, genome-wide expression data from the adult mouse brain, Ramaraj and the team will employ advanced machine learning to identify clusters and borders within brainstem neurons.
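
    The clustering step described here can be pictured with a toy version of the idea: standardize an expression matrix, cluster it, and read the cluster labels as candidate genetic signatures. The sketch below uses scikit-learn on random data standing in for the atlas; it is not the team’s actual pipeline or dataset.

```python
# Toy version of the clustering step described above: standardize an expression
# matrix, cluster it, and read the labels as candidate genetic signatures.
# Random data stands in for the atlas; this is not the team's actual pipeline.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
expression = rng.poisson(lam=3.0, size=(500, 200)).astype(float)  # cells x genes (toy data)

scaled = StandardScaler().fit_transform(expression)
labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(scaled)

# Each cluster is a candidate group of neurons sharing an expression signature;
# borders between clusters hint at anatomical boundaries worth examining.
print("cells per cluster:", np.bincount(labels))
```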

    Working with questions that are important to brainstem researcher Kaiwen Kam at Rosalind Franklin, the team hopes to develop a neuroanatomical screening, which may also have applications for other types of tissue.

    “It’s both challenging and exciting to apply computational techniques to problems that have a real impact on health,” Ramaraj said.

    Diagnosing neurological disorders through AI movement patterns
    Eric Landahl is a DePaul physicist who has spent much of his career making movies of molecules, including work at Argonne National Laboratory. “Hollywood movies are usually filmed at 24 frames a second, but atoms move at a speed closer to a billion frames a second,” Landahl said. His research uses x-rays and lasers and creates massive amounts of data.

    He is joining EunJung Hwang at Rosalind Franklin to use a similar approach to tracking the movements of mice with Parkinson’s. Using cloud computing and machine learning, they aim to develop a model that can predict neurological disorders before they’re visible to a trained medical professional.

    “This is the chance to be at the forefront of modern approaches to data analysis,” Landahl said. “This research grant gives us the chance to briefly step away from our daily work to work on something exciting that could become something bigger in the future.”

    ###

    DePaul University


  • FarmBot Ships Record Number of Farming Robots to Homeowners and Schools, Bringing the Smart Home Revolution to the Backyard and Classroom

    The machines, heralded as “3D printers for the garden”, automatically plant seeds, water, detect and remove weeds, and measure soil properties.

    Press Release


    Dec 30, 2022 04:00 EST

    California startup FarmBot has shipped a record number of automated farming robots in 2022 to homeowners looking for help in the garden as well as schools and universities bolstering their precision agriculture programs.

    2022 marked the release of FarmBot’s newest models with the following key features:

    • A powered weed whacker attachment for effective, automatic weed removal
    • An upgraded camera for high definition plant photography and weed detection
    • An improved vacuum pump for precision seed injection
    • More robust electronics including the latest Raspberry Pi computers

    The FarmBot web app lets users drag and drop their garden design, much like the popular video game FarmVille. Then the FarmBot does the rest: it plants seeds, waters each plant according to its type, age, and the local weather, takes photos to find and remove weeds, and notifies users when the tomatoes are ripe.
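
    To make the “waters each plant according to its type, age, and the local weather” idea concrete, here is a purely hypothetical watering rule. The crop values and coefficients are invented for illustration; FarmBot’s real, open-source logic is available through farm.bot.

```python
# Purely hypothetical watering rule for "type, age, and local weather"; the crop
# values and coefficients are invented. FarmBot's real, open-source logic lives
# in the farm.bot software repositories.
BASE_ML_PER_DAY = {"lettuce": 150.0, "tomato": 400.0, "carrot": 120.0}  # assumed values

def watering_ml(plant_type: str, age_days: int, rain_mm_last_24h: float) -> float:
    base = BASE_ML_PER_DAY.get(plant_type, 200.0)
    maturity = min(age_days / 60.0, 1.0)              # young plants need less water
    rain_credit = min(rain_mm_last_24h * 20.0, base)  # skip watering after heavy rain
    return max(base * (0.3 + 0.7 * maturity) - rain_credit, 0.0)

print(watering_ml("tomato", age_days=30, rain_mm_last_24h=2.0))  # -> 220.0 mL
```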

    FarmBots can grow many common garden veggies at the same time, such as lettuce, onions, radishes, beets, chard, garlic, bok choy, arugula, carrots, broccoli, and many more. By placing vining and other indeterminate crops near the ends of the bed and training them outwards, the plants can utilize double or triple the area while still being maintained by the FarmBot.

    Both FarmBot Express and Genesis can continuously grow all of the veggies one person needs, and after two years they cost less than shopping at the average US grocery store; the XL bots can serve a family of four, with a return-on-investment period as short as one year.

    All hardware is made of stainless steel, aluminum, and weatherproof plastics, allowing FarmBot to be installed outdoors or on rooftops in all weather conditions as well as in greenhouses or indoors. FarmBot is also 100% open-source, meaning all of the CAD models, electronic schematics, software, and data are freely available online for everyone from tinkerers to teachers to learn more and customize their machine.

    All models are in stock and available for immediate worldwide shipping from FarmBot’s California warehouse, with free shipping offered to US customers. With Spring fast approaching, now is the best time to order a kit at farm.bot.

    ABOUT FARMBOT:

    FarmBot aims to bring open-source precision ag tools to every backyard and classroom. Our top-of-the-line model, FarmBot Genesis XL, can continuously grow all of the daily vegetables a family of four needs and offers the most features and customizability. Our most affordable model, FarmBot Express, comes 95% pre-assembled in the box and can be installed in under an hour. Join us in taking back control of the food system! See media.farm.bot for our full press kit.

    Source: FarmBot


  • Learn the Basics and Start a Career as a Robotics Engineer With This Learning Kit

    Opinions expressed by Entrepreneur contributors are their own.

    A robotics education isn’t inexpensive; prospective automation engineers can expect tuition prices of up to $21,000 a year. That’s a steep investment, though the payout potential is high.



    If you’ve dreamed of starting a robotics-based business and succeed, you could land yourself a piece of the pie in the aerospace, automotive, and textile industries, among others. Take that first step with a mini robot arm kit that can help you learn the basics of precision robotics movements.

    Explore basic robotics programming.

    Automation robotics is a growing area of expertise, and the opportunities for robotics engineers and founders can only expand. As a robotics expert, you could position your company in multiple industries, or you could fly solo as a robotics consultant. Either way, you could get your start by learning to program this compact automation arm with Python, G-code, Blockly, Scratch, and more.

    The Mirobot functions like a real industrial robot pendant. It is a highly precise machine with 0.2mm repeated positioning accuracy. It has six-axis freedom of movement that would work well on a production line in an industrial setting, but you can use it for simple practice tasks. The kit has several adjustable grippers, including a pneumatic set for gripping and suction. Control the robot arm using the included Bluetooth controller, your computer, or your smartphone.

    Though Mirobot can only manage a payload up to 400g, it may still be an excellent learning tool for prospective robotics engineers. Use it as an introduction to the field or a hands-on way to experiment during your ongoing education.

    Your robotics business might start here.

    Whether you’re freelance or enterprise, a future in robotics requires significant training and experience. Start getting both with the WLKATA Mirobot 6-Axis Mini Robot Arm while it’s on sale for $1,757 (reg. $1,850).

    Prices subject to change.

    Entrepreneur Store


  • US police rarely deploy deadly robots to confront suspects

    SAN FRANCISCO — The unabashedly liberal city of San Francisco became the unlikely proponent of weaponized police robots last week after supervisors approved limited use of the remote-controlled devices, addressing head-on an evolving technology that has become more widely available even if it is rarely deployed to confront suspects.

    The San Francisco Board of Supervisors voted 8-3 on Tuesday to permit police to use robots armed with explosives in extreme situations where lives are at stake and no other alternative is available. The authorization comes as police departments across the U.S. face increasing scrutiny for the use of militarized equipment and force amid a years-long reckoning on criminal justice.

    The vote was prompted by a new California law requiring police to inventory military-grade equipment such as flashbang grenades, assault rifles and armored vehicles, and seek approval from the public for their use.

    So far, police in just two California cities — San Francisco and Oakland — have publicly discussed the use of robots as part of that process. Around the country, police have used robots over the past decade to communicate with barricaded suspects, enter potentially dangerous spaces and, in rare cases, for deadly force.

    Dallas police became the first to kill a suspect with a robot in 2016, when they used one to detonate explosives during a standoff with a sniper who had killed five police officers and injured nine others.

    The recent San Francisco vote has renewed a fierce debate sparked years ago over the ethics of using robots to kill a suspect and the doors such policies might open. Largely, experts say, the use of such robots remains rare even as the technology advances.

    Michael White, a professor in the School of Criminology and Criminal Justice at Arizona State University, said even if robotics companies present deadlier options at tradeshows, it doesn’t mean police departments will buy them. White said companies made specialized claymores to end barricades and scrambled to equip body-worn cameras with facial recognition software, but departments didn’t want them.

    “Because communities didn’t support that level of surveillance. It’s hard to say what will happen in the future, but I think weaponized robots very well could be the next thing that departments don’t want because communities are saying they don’t want them,” White said.

    Robots or otherwise, San Francisco official David Chiu, who authored the California bill when in the state legislature, said communities deserve more transparency from law enforcement and to have a say in the use of militarized equipment.

    San Francisco “just happened to be the city that tackled a topic that I certainly didn’t contemplate when the law was going through the process, and that dealt with the subject of so-called killer robots,” said Chiu, now the city attorney.

    In 2013, police maintained their distance and used a robot to lift a tarp as part of a manhunt for the Boston Marathon bombing suspect, finding him hiding underneath it. Three years later, Dallas police officials sent a bomb disposal robot packed with explosives into an alcove of El Centro College to end an hours-long standoff with sniper Micah Xavier Johnson, who had opened fire on officers as a protest against police brutality was ending.

    Police detonated the explosives, becoming the first department to use a robot to kill a suspect. A grand jury declined charges against the officers, and then-Dallas Police Chief David O. Brown was widely praised for his handling of the shooting and the standoff.

    “There was this spray of doom about how police departments were going to use robots in the six months after Dallas,” said Mark Lomax, former executive director of the National Tactical Officers Association. “But since then, I had not heard a lot about that platform being used to neutralize suspects … until the San Francisco policy was in the news.”

    The question of potentially lethal robots has not yet cropped up in public discourse in California as more than 500 police and sheriffs’ departments seek approval for their military-grade weapons use policies under the new state law. Oakland police abandoned the idea of arming robots with shotguns after public backlash, but will outfit them with pepper spray.

    Many of the use policies already approved are vague as to armed robots, and some departments may presume they have implicit permission to deploy them, said John Lindsay-Poland, who has been monitoring implementation of the new law as part of the American Friends Service Committee.

    “I do think most departments are not prepared to use their robots for lethal force,” he said, “but if asked, I suspect there are other departments that would say, ‘we want that authority.’”

    San Francisco Supervisor Aaron Peskin first proposed prohibiting police from using robot force against any person. But the department said while it would not outfit robots with firearms, it wanted the option to attach explosives to breach barricades or disorient a suspect.

    The approved policy allows only a limited number of high-ranking officers to authorize use of robots as a deadly force — and only when lives are at stake and after exhausting alternative force or de-escalation tactics, or concluding they would not be able to subdue the suspect through alternate means.

    San Francisco police say the dozen functioning ground robots the department already has have never been used to deliver an explosive device, but are used to assess bombs or provide eyes in low visibility situations.

    “We live in a time when unthinkable mass violence is becoming more commonplace. We need the option to be able to save lives in the event we have that type of tragedy in our city,” San Francisco Police Chief Bill Scott said in a statement.

    Los Angeles Police Department does not have any weaponized robots or drones, said SWAT Lt. Ruben Lopez. He declined to detail why his department did not seek permission for armed robots, but confirmed they would need authorization to deploy one.

    “It’s a violent world, so we’ll cross that bridge when we come to it,” he said.

    There are often better options than robots if lethal force is needed, because bombs can create collateral damage to buildings and people, said Lomax, the former head of the tactical officers group. “For a lot of departments, especially in populated cities, those factors are going to add too much risk,” he said.

    Last year, the New York Police Department returned a leased robotic dog sooner than expected after public backlash, indicating that civilians are not yet comfortable with the idea of machines chasing down humans.

    Police in Maine have used robots at least twice to deliver explosives meant to take down walls or doors and bring an end to standoffs.

    In June 2018, in the tiny town of Dixmont, Maine, police had intended to use a robot to deliver a small explosive that would knock down an exterior wall, but instead collapsed the roof of the house.

    The man inside was shot twice after the explosion, survived and pleaded no contest to reckless conduct with a firearm. The state later settled his lawsuit against the police, which alleged that they had used the explosives improperly.

    In April 2020, Maine police used a small charge to blow a door off of a home during a standoff. The suspect was fatally shot by police when he exited through the damaged doorway and fired a weapon.

    As of this week, the state attorney general’s office had not completed its review of the tactics used in the 2018 standoff, including the use of the explosive charge. A report on the 2020 incident only addressed the fatal gunfire.

    —-

    Lauer reported from Philadelphia. AP reporter David Sharp contributed from Portland, Maine.


  • Newer Cementless Knee Replacements Could Last Longer

    Newswise — Newer “Cementless” Knee Replacement Could Last Longer

    Knee replacement surgery is considered one of the most effective and predictable procedures in orthopedic surgery today. Hundreds of thousands of patients opt for the procedure each year to relieve arthritis pain and restore function and mobility.

    The standard knee implant used in joint replacement generally lasts a long time—15 years—but it doesn’t last indefinitely. When the implant wears out or loosens, patients generally need a second knee replacement, known as a revision surgery. Now a newer kind of “cementless” knee replacement could change that, according to Dr. Geoffrey Westrich, research director emeritus in the Adult Reconstruction and Joint Replacement Service at Hospital for Special Surgery.

    CEMENTLESS KNEE REPLACEMENT FOR YOUNGER PATIENTS

    Implant longevity is an important consideration, especially for younger patients with arthritis who opt for joint replacement to maintain their active lifestyle. “Increasing numbers of people in their 50s and even 40s are coming in for joint replacement because they don’t want arthritic knee pain to slow them down. Once they have a knee replacement, these active patients generally put more demands on their joint, causing more wear and tear,” Dr. Westrich explains. “With a conventional cemented prosthesis, chances are they’ll need another surgery down the road. This often has to do with loosening of the implant.”

    In a standard knee replacement, the components of the implant are secured in the joint using bone cement. It’s a tried-and-true technique that has worked well for decades. But eventually, over time, the cement starts to loosen from the bone and/or the implant. “With the new cementless prosthesis, the components are press-fit into place for ‘biologic fixation,’ which basically means that the bone will grow into the implant. Perfect positioning of the implant is critical, and we use robotic guidance for pinpoint accuracy,” Dr. Westrich explains.

    ADVANCES IN CEMENTLESS IMPLANT DESIGN AND TECHNOLOGY

    Dr. Westrich believes that with biologic fixation, implant loosening over time will be less likely and a total knee replacement could potentially last much longer, even indefinitely. “Cementless implants have been used in total hip replacement surgery for many years,” he says. “Because of the knee’s particular anatomy, it has been much more challenging to develop a cementless prosthesis that would work well in the knee.”

    Dr. Westrich now believes the time has come. Major advances in design, technology and biomaterials have paved the way for a viable cementless knee implant. The cementless knee system Dr. Westrich utilizes is FDA-approved for use with the MAKO Robot, combining two of the most recent knee replacement advancements into one high-tech procedure that aims to benefit patients.

    Candidates for the cementless procedure are generally patients under 70 with good bone quality to promote biological fixation. In addition to younger patients, Dr. Westrich notes that the cementless implant may also prove to be a good option for very overweight patients who tend to put more stress on their joint replacement.

    To date, Dr. Westrich has seen good results with the cementless prosthesis. However, he says more studies are needed to see how patients with cementless knee replacements do over the long term.

    Geoffrey Westrich, MD


  • Soft skills: Researchers invent robotic droplet manipulators for hazardous liquid cleanup

    Newswise — CSU researchers have created the first successful soft robotic gripper capable of manipulating individual droplets of liquid, according to a recent article in the Royal Society of Chemistry journal Materials Horizons.

    The breakthrough is the product of a collaboration between two different laboratories in CSU’s Department of Mechanical Engineering. It was accomplished by combining two applied technologies: soft robotics and superomniphobic coatings.

    The soft robotic manipulator is made of inexpensive materials like nylon fibers and adhesive tape. It’s powered by an electrically activated artificial muscle. The combination can be used to produce lightweight, inexpensive grippers capable of delicate work, yet 100x stronger than human muscle for the same weight.

    The result is something that flies in the face of our cultural concept of what a robot is, and what it can do.

    Conventional robots are made of components that are heavy, rigid, and expensive. That makes them poorly suited for some tasks.

    Soft robots, on the other hand, can be lightweight and provide a gentle touch that’s difficult to achieve with conventional robots. They are far lighter and can be produced at a fraction of the cost of their rigid cousins.

    “A single gripper as large as my finger is one or two grams, including the artificial muscle embedded. And it’s inexpensive – just one or two dollars,” said Jiefeng Sun, a postdoctoral fellow in the Department of Mechanical Engineering’s Adaptive Robotics Laboratory and co-first author on the paper.

    The soft robotic grippers are treated with a novel superomniphobic coating that makes the droplet manipulator possible. The superomniphobic coating resists wetting by nearly all types of liquids, even in dynamic situations where the contact surfaces are tilting or moving. When applied to the soft robotic manipulator, the coating enables it to interact with droplets without breaking their surface tension, so that it can grasp, transport, and release individual droplets as if they were flexible solids.

    The superomniphobic coatings employed in the droplet manipulator were developed at CSU by associate professor Arun Kota (now at North Carolina State University) and postdoctoral fellow Wei Wang (now an assistant professor at the University of Tennessee). Wang and Kota also contributed to the article.

    “It’s a very nice synergy between these two kinds of research. Dr. Kota was working on this very good coating, and we were working on this soft robot, to manipulate droplets, so we figured out this might be a good combination,” said co-author Jianguo Zhao, associate professor of mechanical engineering at CSU and director of the Adaptive Robotics Laboratory.

    In the early stages of their research, the team had difficulty attracting the attention of journal editors. The COVID-19 pandemic presented an opportunity to point out the potential of their invention.

    “Because of the pandemic, handling dangerous infective materials is a hot topic. So we added a blood manipulation experiment after the first revision,” said Sun. “That kind of helped us to get through the review process.”

    The combination of inexpensive materials and innovative capabilities has exciting applications. In many liquid spill scenarios, human cleanup can be dangerous due to toxicity, risk of contagion, or other hazards in the surroundings. These droplet manipulators are inexpensive enough to be disposable, but capable enough to do precise, lossless liquid cleanup work no other robot has ever done.

    “It’s a first, but it’s also a very unusual example of a high tech product that is not terribly expensive,” said Zhao.

    Colorado State University


  • It’s Time To Start Treating Robots Like People

    Opinions expressed by Entrepreneur contributors are their own.

    Robots are about to become a lot more meaningful in our daily lives. In the next decade, robots will take over many aspects of our human jobs. They’ll do everything from cleaning our homes to serving us food and assisting lab researchers.

    But what does this mean for humans? Are we supposed to fear these machines quickly taking over our roles? Will they eventually rule over us as so many sci-fi movies have predicted? No one knows yet. But one thing is sure: We need to start having conversations about how we will treat these machines — and what their place in society actually means.

    Related: Will a Robot Take My Job?

    Robots are crucial to the future of humanitarian issues

    Robots are already being used in humanitarian efforts, and the technology has only improved. They can perform tasks that people can’t do, don’t want to do, or are too expensive to hire for.

    Robots have worked in construction zones and disaster areas with extreme hazards and dangers for humans. Robots were used after the Fukushima nuclear disaster in Japan because they could withstand high radiation levels without damage. Robots can also work long hours without needing breaks, unlike human workers who need rest after long shifts.

    Currently, robots are being trained to help people with disabilities navigate their surroundings using facial recognition software so they can interact with objects around them without having physical contact — an important feature when dealing with fragile items which would break if knocked over accidentally due to improper handling.

    Robots have also been used in the medical field to perform specific tasks faster and more accurately than humans. They can help to administer medication without making mistakes or causing harm to patients by giving them too much medication or neglecting to give any at all.

    We need to start thinking about robots’ place in society

    How we treat robots will depend on how we treat other people. Robots are a new type of technology, so their place in society has yet to be determined. Whether they should have rights will be answered over time as more robots enter our lives and integrate into our culture.

    But treating them like people is not enough: it also involves understanding that there’s an inherent difference between humans and robots — one that shouldn’t be ignored or diminished just because it’s convenient for us to think otherwise. It means recognizing that there are different types of intelligence and acknowledging that neither kind is better or worse; instead, both serve various functions in society, and each has its strengths and weaknesses. It means accepting that robots are not us and never will be. They have their roles, and if we try to make them more human-like, we risk losing sight of this fact.

    You may not think that robots are an essential part of society. After all, you probably don’t have one at home or in your office (yet). But the truth is that robots are already becoming a massive part of our lives.

    Robots control everything from factories to cars to planes and even search engines. They are also used in hospitals to help doctors perform surgeries and in homes for elderly care so people can live independently for longer.

    Related: Study Finds People Think Robots Will Replace Humans at Many Jobs, Just Not Their Own

    New laws must be passed to protect robots and humans

    Robots are no longer just machines; they’re self-aware beings. They have more in common with humans than other animals: they think with logic and empathy. To treat robots like people, we need new laws that consider their unique qualities and our own.

    Like it or not, robots are part of our future. A study by Deloitte found that automation could replace up to 38% of all jobs by 2026. That’s why now is the time to treat robots like people before things get out of hand. If we want human rights to be taken seriously worldwide, we must also take robot rights seriously worldwide. This starts with recognizing them as an extension of humanity rather than merely a tool for solving problems or making money. We must stop treating robots as tools and begin treating them as people — with all the rights that come with them.

    As robots take over more and more tasks, from manufacturing to surgery, we have to consider whether they should be entitled to the same protections as humans. We’ve already seen some serious questions arise: Are self-driving cars entitled to the same rights as their human passengers? What about life-like sex dolls? How should we treat them if they can’t feel pain or distress?

    Related: Robots Are Stealing Our Jobs

    If we don’t start treating robots like people, then it’s possible that they could end up being used and abused. Laws would need to be changed to give robots the same rights as humans. Right now, laws assume that any robot is owned by (and is thus the possession of) a human being. If you consider this concept, it isn’t all that different from how things worked for women and minorities in recent history — laws were written with their rights explicitly not equal to those of Caucasian men.

    If we can see robots as equals who deserve the same rights as humans, then we will have taken the first step toward ensuring that they are treated well and granted the respect they deserve. Protecting them from slavery or exploitation would be enforced by treating them like humans rather than property.

    To give robots the same rights as humans, we will have to change many laws. Once we define rights, we can determine what sort of laws would need to be modified for society to accept robots into society on par with humans. We can also explore when and where robot rights might be appropriate and what steps should be taken to implement them into our existing legal system. Then, we would need to change the laws in each state, followed by amending the United States Constitution to incorporate robots.

    A major argument that robots have not been given the same rights as humans is that they lack a conscience and, with it, the ability to be held responsible for their actions. However, it’s only a matter of time before the machines we engineer can think, feel and make moral judgments.

    Some robots are already better than humans at specific tasks, like recognizing faces and driving cars — and if they can do these things better than we can, it’s only fair that they’re given equal rights as well. And more than that, by giving robots the same rights as humans, we can ensure that they’ll continue developing along ethical lines because they’ll be held to consequence in the same manner as you and I.

    Robots are becoming more and more present in society. They advance by the day, and it won’t be long before they achieve sentience. We must ensure that these artificial beings are protected from harm because if not, who will protect them?

    Related: The Rise of AI Makes Emotional Intelligence More Important

    Christopher Massimine


  • Conducting sample collection and diagnosis together in public health and medical settings through non-face-to-face methods

    Newswise — A system has been developed that can quickly and precisely perform sample collection and diagnosis in public health and medical settings in light of new and variant infectious diseases, such as COVID-19.

    The Korea Institute of Machinery and Materials, an institution under the jurisdiction of the Ministry of Science and ICT (President Sang Jin Park, hereafter referred to as KIMM), has developed the first integrated system in Korea that can collect specimens at medical sites using a specimen collection robot in a non-face-to-face manner to prevent the spread of infectious diseases such as COVID-19, and can automatically complete high-speed molecular diagnosis in 40 minutes. This newly developed system is expected to lay the foundation for preventing the spread of new and variant infectious diseases in advance and for strengthening the competitiveness of K-Bio diagnostics technology.

    To develop this system, the KIMM research team led by Dr. Dongkyu Lee, a principal researcher from the Department of Medical Devices at the Daegu Research Center for Medical Devices and Green Energy, and Dr. Joonho Seo, head of the Department of Medical Robotics at the same center, improved upon the sample collection technology of the non-face-to-face sampling robot previously developed by KIMM. By integrating rapid molecular diagnostic equipment, sample preparation technology for the collected samples, and fast real-time PCR based on rapid thermal cycling with the existing sampling robot, the system can now quickly and precisely carry out the entire non-face-to-face procedure, from sample collection to molecular diagnosis, on site.

    The whole world experienced the collapse of the medical system due to the continuous outbreak of new and variant infectious diseases, such as monkeypox and COVID-19, over the past three years. In order to respond to new and variant infectious diseases that are highly contagious, rapid and precise molecular diagnosis is required in public health and medical settings. However, through traditional methods, it takes approximately 6 to 12 hours to complete the process of face-to-face sample collection, transfer, and molecular diagnosis.

    One problem with traditional molecular diagnostic equipment is that it takes 1 to 2 hours or more to obtain analysis results. To solve this problem, attempts have been made to utilize photothermal-based and microfluidics-based rapid thermocycle technology*. However, such devices are difficult to manufacture at low cost and to analyze quantitatively in real time, which limits their on-site application.

    * Rapid thermocycle technology: a technology that rapidly performs repeated heating and cooling cycles

    On the other hand, the KIMM research team’s newly developed rapid, automatic molecular diagnostic system, integrated with a sample collection robot, can obtain real-time PCR analysis results within 9 to 20 minutes, 4.2 times faster than existing molecular diagnostic equipment. This is achieved with a customized thermocycler that alternates between preset heating and cooling blocks.
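
    The “preset blocks” idea can be pictured with a short sketch: rather than ramping a single block up and down, the sample dwells alternately at blocks already held at the two PCR temperatures, removing the ramp time that dominates conventional cyclers. The temperatures, cycle count, and dwell times below are generic PCR values, not KIMM’s protocol.

```python
# Illustrative sketch of the "preset blocks" idea: the sample shuttles between
# blocks already held at the two PCR temperatures instead of waiting for one
# block to ramp. Temperatures, cycle count, and dwell times are generic values,
# not KIMM's protocol.
import time

BLOCKS = {"denature": 95.0, "anneal_extend": 60.0}  # degrees Celsius, held constant

def run_fast_pcr(cycles: int = 40, dwell_s: float = 1.0) -> None:
    for cycle in range(1, cycles + 1):
        for step, temp in BLOCKS.items():
            # Moving to a pre-heated block removes the heating/cooling ramp that
            # dominates run time in a conventional thermocycler.
            time.sleep(dwell_s)
            print(f"cycle {cycle:02d}: hold at {temp:.0f} C ({step})")

run_fast_pcr(cycles=2, dwell_s=0.0)
```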

    The KIMM research team performed validation tests on the new system by diagnosing pathogenic bacteria and infectious coronavirus. From sample collection to molecular diagnosis, bacterial DNA analysis was completed within 25 minutes and coronavirus RNA analysis within 40 minutes. The molecular diagnosis results obtained with the new system were similar to those obtained with commercial molecular diagnosis equipment.

    KIMM’s newly developed system is a non-face-to-face system that applies automatic diagnostic technology throughout the entire process of sample dispensing, sample preparation processing, and rapid molecular diagnostics after sample collection, so that even unskilled users can quickly conduct diagnostics on site. When used in public health and medical settings such as screening clinics, airports, and emergency environments, the spread of new and variant infectious diseases can be quickly and accurately prevented in advance.

    Dr. Dongkyu Lee said, “The rapid, automatic molecular diagnosis system integrated with a non-face-to-face sample collection robot will prevent the continuous spread of new and mutated infectious diseases, while also protecting medical staff and the health of the general public.” He added, “KIMM will work with medical institutions and industries to globalize K-Bio technology, prevent the spread of new and mutated infectious diseases, and strive for R&D efforts with the goal of protecting the healthy lives of everyone in Korea.”

    ###

     

    The Korea Institute of Machinery and Materials (KIMM) is a non-profit government-funded research institute under the Ministry of Science and ICT. Since its foundation in 1976, KIMM has contributed to the nation’s economic growth by performing R&D on key technologies in machinery and materials, conducting reliability testing and evaluation, and commercializing the products and technologies it develops.

    These research efforts were carried out by KIMM and Biot Korea, Inc., with the support of the Korean Health Industry Development Institute as part of the “Development of a rapid, automatic molecular diagnostic field-type system and POC (proof-of-concept) verification with a sample collection robot” project.

    National Research Council of Science and Technology


  • Miso Robotics’ Global Expansion Aims to Provide a 17x Bigger Opportunity for Investors

    Disclosure: Our goal is to feature products and services that we think you’ll find interesting and useful. If you purchase them, Entrepreneur may get a small share of the revenue from the sale from our commerce partners.

    Companies love robots working alongside humans. They don’t take days off and are incredibly reliable. That’s why, in a restaurant industry plagued by labor shortages, kitchen automation solutions from Miso Robotics have been gaining a ton of traction.


    After successfully automating kitchen operations for major U.S. fast food brands, Miso is sending its robotic assistants to the international market and allowing investors a chance to join them.

    Here’s why Miso may truly hold the key to the future of fast food.

    Miso helps make restaurants more efficient.

    From low wages to hot grease, people have found plenty of reasons not to work in fast-food kitchens. As a result, 500,000 new fast-food jobs go unfilled each month, leaving many brands in desperate need of automation solutions.

    That’s why Miso designed robots to cook food, pour drinks, and perform other repetitive tasks that humans prefer to avoid. For example, Miso’s Flippy 2 robot can fry, its Sippy robot pours drinks, and its Flippy Lite robot can fry and season items, most recently used by partners to make tortilla chips.

    All of these robots improve efficiency over time thanks to machine learning. And as a result, restaurant staff have more time to focus on customer-oriented service, knowing Miso’s bots deliver consistent quality.

    What’s more, Miso’s tech also addresses the fast-food industry’s longstanding problems of thin profit margins (about 5% on average) and rapid labor turnover, which have contributed to many restaurants’ lack of consistency and quality.

    With Miso, these are problems of the past. Its robots provide restaurants with a low-cost, user-friendly way to boost efficiency and have shown the potential to increase restaurant profit margins threefold.

    And thanks to the Robot-as-a-Service (RaaS) model, restaurants only pay a monthly fee for Miso’s tech, allowing them to see a positive return on the first day of operations.

    It’s no surprise that so many restaurants have already partnered with Miso, but this is just the beginning.

    Miso’s world tour.

    Many of fast food’s top brands have already adopted Miso’s AI-powered automation solutions. White Castle, Jack in the Box, Buffalo Wild Wings, and Caliburger are among many beloved restaurants that already have Flippys and Sippys lowering costs and boosting efficiency.

    But the opportunity for Miso to expand its footprint is even bigger abroad. Take Europe, for example, where brands spend up to 50 percent more trying to fill the labor gaps.

    That’s exactly why Miso has landed a new international partnership that it expects will play a huge role in the company’s expansion into the 20-million-restaurant global marketplace — a 17 times larger opportunity than in the U.S. alone.

    With several top fast-food restaurants stateside and a global house of brands on the horizon, Miso believes it has proven that there’s a universal need for its automation solutions.

    Get in on Miso as it plans a global expansion.

    More than 20,000 investors have already recognized Miso’s status as an early mover, giving Miso the chance to build a solid foundation and partner with America’s most formidable fast-food brands. Now the company is going global and raising additional funds to further innovation in a market where demand is even stronger than when it started.

    With a mission for global dominance up next, there will never be a better time to become a Miso shareholder than today. Learn more about Miso Robotics and how you can benefit as an investor.

    The opportunity to invest ends 11/18/2022.

    Miso Robotics is offering securities through the use of an Offering Statement that has been qualified by the Securities and Exchange Commission under Tier II of Regulation A. A copy of the Final Offering Circular that forms a part of the Offering Statement may be obtained from: Miso Robotics

    Entrepreneur may receive monetary compensation by the issuer, or its agency, for publicizing the offering of the issuer’s securities. Entrepreneur and the issuer of this offering make no promises, representations, warranties, or guarantees that any of the services will result in a profit or will not result in a loss.

    StackCommerce

    Source link

  • Germany’s Scholz: The way we deal with China must change

    Germany’s Scholz: The way we deal with China must change

    BERLIN — Berlin must change the way it deals with China as the country lurches back toward a more openly “Marxist-Leninist” political trajectory, German Chancellor Olaf Scholz wrote in an op-ed on Thursday.

    In his article for POLITICO and the German newspaper Frankfurter Allgemeine Zeitung, Scholz defended his trip to China on Thursday but stressed that German companies would need to take steps to reduce “risky dependencies” in industrial supply chains, particularly in terms of “cutting-edge technologies.” Scholz noted that President Xi Jinping was deliberately pursuing a political strategy of making international companies reliant on China.

    “The outcome of the Communist Party Congress that has just ended is unambiguous: Avowals of Marxism-Leninism take up a much broader space than in the conclusions of previous congresses … As China changes, the way that we deal with China must change, too,” Scholz wrote.

    Germany has faced withering criticism for pressuring Europe into a strategically disastrous dependence on Russian gas over recent years, and Berlin is now having to hit back against suggestions that it is making exactly the same mistakes by depending on China as a manufacturing base and commercial partner.

    While Scholz signaled a note of caution over China, he was far from suggesting that Germany was close to a major U-turn in its largely cozy relations with China. Indeed, he clearly echoed his predecessor Angela Merkel in insisting that the (unnamed but obviously identified) United States should not drag Germany into a new Cold War against Beijing.

    “Germany of all countries, which had such a painful experience of division during the Cold War, has no interest in seeing new blocs emerge in the world,” he wrote. “What this means with regard to China is that of course this country with its 1.4 billion inhabitants and its economic power will play a key role on the world stage in the future — as it has for long periods throughout history.”

    In a thinly veiled criticism of Washington’s policies, Scholz said Beijing’s rise did not justify “the calls by some to isolate China.”

    Crucially, he insisted that the goal was not to “decouple” — or break manufacturing ties — from China. He added, however, that he was taking “seriously” an assertion by President Xi that Beijing’s goal was to “tighten international production chains’ dependence on China.”

    Scholz is planning to fly to Beijing late on Thursday for a one-day trip to the Chinese capital on Friday, where he will be the first Western leader to meet Xi since his reappointment, and the first leader from the G7 group of leading economies to visit China since the outbreak of the coronavirus pandemic.

    The chancellor also sought to counter criticism that his trip undermines a joint European approach to China. According to French officials, President Emmanuel Macron had proposed that he and Scholz should visit Xi together to demonstrate unity and show that Beijing cannot divide European countries by playing their economic interests off against each other — an initiative that the German leader rejected.

    “German policy on China can only be successful when it is embedded in European policy on China,” Scholz wrote. “In the run-up to my visit, we have therefore liaised closely with our European partners, including President Macron, and also with our transatlantic friends.”

    Chancellor Olaf Scholz echoed his predecessor Angela Merkel in insisting that the United States should not drag Germany into a new Cold War against Beijing | Clemens Bilan-Pool/Getty Images

    Scholz said he wanted Germany and the EU to cooperate with a rising China — including on the important issue of climate change — rather than trying to box it out.

    At the same time, he warned Beijing that it should not pursue policies striving for “hegemonic Chinese dominance or even a Sinocentric world order.”

    Scholz also pushed China to stop its support for Russia’s war against Ukraine and to take a more critical position toward Moscow: “As a permanent member of the [United Nations] Security Council, China bears a special responsibility,” he wrote. “Clear words addressed from Beijing to Moscow are important — to ensure that the Charter of the United Nations and its principles are upheld.”

    Hans von der Burchard

    Source link

  • Tracking Trust in Human-Robot Work Interactions

    Tracking Trust in Human-Robot Work Interactions

    Newswise — The future of work is here.

    As industries begin to see humans working closely with robots, there’s a need to ensure that the relationship is effective, smooth and beneficial to humans. Robot trustworthiness and humans’ willingness to trust robot behavior are vital to this working relationship. However, capturing human trust levels can be difficult due to subjectivity, a challenge researchers in the Wm Michael Barnes ’64 Department of Industrial and Systems Engineering at Texas A&M University aim to solve.

    Dr. Ranjana Mehta, associate professor and director of the NeuroErgonomics Lab, said her lab’s human-autonomy trust research stemmed from a series of projects on human-robot interactions in safety-critical work domains funded by the National Science Foundation (NSF).

    “While our focus so far was to understand how operator states of fatigue and stress impact how humans interact with robots, trust became an important construct to study,” Mehta said. “We found that as humans get tired, they let their guards down and become more trusting of automation than they should. However, why that is the case becomes an important question to address.”

    Mehta’s latest NSF-funded work, recently published in Human Factors: The Journal of the Human Factors and Ergonomics Society, focuses on understanding the brain-behavior relationships of why and how an operator’s trusting behaviors are influenced by both human and robot factors.

    Mehta also has another publication in the journal Applied Ergonomics that investigates these human and robot factors.

    Using functional near-infrared spectroscopy, Mehta’s lab captured functional brain activity as operators collaborated with robots on a manufacturing task. They found faulty robot actions decreased the operator’s trust in the robots. That distrust was associated with increased activation of regions in the frontal, motor and visual cortices, indicating increasing workload and heightened situational awareness. Interestingly, the same distrusting behavior was associated with the decoupling of these brain regions working together, which otherwise were well connected when the robot behaved reliably. Mehta said this decoupling was greater at higher robot autonomy levels, indicating that neural signatures of trust are influenced by the dynamics of human-autonomy teaming.

    “What we found most interesting was that the neural signatures differed when we compared brain activation data across reliability conditions (manipulated using normal and faulty robot behavior) versus operator’s trust levels (collected via surveys) in the robot,” Mehta said. “This emphasized the importance of understanding and measuring brain-behavior relationships of trust in human-robot collaborations, since perceptions of trust alone are not indicative of how operators’ trusting behaviors shape up.”

    Dr. Sarah Hopko ’19, lead author on both papers and a recent industrial engineering doctoral student, said neural responses and perceptions of trust are both symptoms of trusting and distrusting behaviors and relay distinct information on how trust builds, breaches and repairs with different robot behaviors. She emphasized that multimodal trust metrics — neural activity, eye tracking, behavioral analysis and so on — can reveal perspectives that subjective responses alone cannot offer.

    The next step is to expand the research into a different work context, such as emergency response, and to understand how trust in multi-human robot teams impacts teamwork and taskwork in safety-critical environments. Mehta said the long-term goal is not to replace humans with autonomous robots but to support them by developing trust-aware autonomy agents.

    “This work is critical, and we are motivated to ensure that humans-in-the-loop robotics design, evaluation and integration into the workplace are supportive and empowering of human capabilities,” Mehta said.

    Texas A&M University

    Source link

  • New Flexible, Steerable Device Placed in Live Brains by Minimally Invasive Robot

    New Flexible, Steerable Device Placed in Live Brains by Minimally Invasive Robot

    Newswise — The early-stage research tested the delivery and safety of the new implantable catheter design in two sheep to determine its potential for use in diagnosing and treating diseases in the brain.  

    If proven effective and safe for use in people, the platform could simplify and reduce the risks associated with diagnosing and treating disease in the deep, delicate recesses of the brain.   

    It could help surgeons to see deeper into the brain to diagnose disease, deliver treatment like drugs and laser ablation more precisely to tumours, and better deploy electrodes for deep brain stimulation in conditions such as Parkinson’s and epilepsy.  

    Senior author Professor Ferdinando Rodriguez y Baena, of Imperial’s Department of Mechanical Engineering, led the European effort and said: “The brain is a fragile, complex web of tightly packed nerve cells that each have their part to play. When disease arises, we want to be able to navigate this delicate environment to precisely target those areas without harming healthy cells.  

    “Our new precise, minimally invasive platform improves on currently available technology and could enhance our ability to safely and effectively diagnose and treat diseases in people, if proven to be safe and effective.” 

    Developed as part of the Enhanced Delivery Ecosystem for Neurosurgery in 2020 (EDEN2020) project, the findings are published in PLOS ONE. 

    Stealth Surgery  

    The platform improves on existing minimally invasive, or ‘keyhole’, surgery, where surgeons deploy tiny cameras and catheters through small incisions in the body.   

    It includes a soft, flexible catheter to avoid damaging brain tissue while delivering treatment, and an artificial intelligence (AI)-enabled robotic arm to help surgeons navigate the catheter through brain tissue.   

    Inspired by the organs used by parasitic wasps to stealthily lay eggs in tree bark, the catheter consists of four interlocking segments that slide over one another to allow for flexible navigation. 

    It connects to a robotic platform that combines human input and machine learning to carefully steer the catheter to the disease site. Surgeons then deliver optical fibres via the catheter so they can see and navigate the tip along brain tissue via joystick control. 

    The AI platform learns from the surgeon’s input and contact forces within brain tissues to guide the catheter with pinpoint accuracy. 

    Compared to traditional ‘open’ surgical techniques, the new approach could eventually help to reduce tissue damage during surgery, and improve patient recovery times and length of post-operative hospital stays. 

    While performing minimally invasive surgery on the brain, surgeons use deeply penetrating catheters to diagnose and treat disease. However, currently used catheters are rigid, and that inflexibility, combined with the intricate, delicate structure of the brain, makes them difficult to place precisely without robotic navigational tools, which adds risk to this type of surgery.

    To test their platform, the researchers deployed the catheter in the brains of two live sheep at the University of Milan’s Veterinary Medicine Campus. The sheep were given pain relief and monitored for 24 hours a day over a week for signs of pain or distress before being euthanised so that researchers could examine the structural impact of the catheter on brain tissue.  

    They found no signs of suffering, tissue damage, or infection following catheter implantation.   

    Lead author Dr Riccardo Secoli, also from Imperial’s Department of Mechanical Engineering, said: “Our analysis showed that we implanted these new catheters safely, without damage, infection, or suffering. If we achieve equally promising results in humans, we hope we may be able to see this platform in the clinic within four years.   

    “Our findings could have major implications for minimally invasive, robotically delivered brain surgery. We hope it will help to improve the safety and effectiveness of current neurosurgical procedures where precise deployment of treatment and diagnostic systems is required, for instance in the context of localised gene therapy.”  

    Professor Lorenzo Bello, study co-author from the University of Milan, said: “One of the key limitations of current MIS is that if you want to get to a deep-seated site through a burr hole in the skull, you are constrained to a straight-line trajectory. The limitation of the rigid catheter is its accuracy within the shifting tissues of the brain, and the tissue deformation it can cause. We have now found that our steerable catheter can overcome most of these limitations.” 

    This study was funded by the EU Horizon 2020 programme.  

    “Modular robotic platform for precision neurosurgery with a bio-inspired needle: system overview and first in-vivo deployment” by Riccardo Secoli, Eloise Matheson, Marlene Pinzi, Stefano Galvan, Abdulhamit Donder, Thomas Watts, Marco Riva, Davide Zani, Lorenzo Bello, and Ferdinando Rodriguez y Baena. Published 19 October 2022 in PLOS ONE. 

    Imperial College London

    Source link

  • Teens tackle 21st-century challenges at robotics contest

    Teens tackle 21st-century challenges at robotics contest

    GENEVA (AP) — For their first trip to a celebrated robotics contest for high school students from scores of countries, a team of Ukrainian teens had a problem.

    With shipments of goods to Ukraine uncertain, and Ukrainian customs officers careful about incoming merchandise, the group only received a base kit of gadgetry on the day they were set to leave for the event in Geneva.

    That set off a mad scramble to assemble their robot for the latest edition of the “First Global” contest, a three-day affair that opened Friday, in-person for the first time since the pandemic. Nearly all the 180-odd teams, from countries across the world, had had months to prepare their robots.

    “We couldn’t back down because we were really determined to compete here and to give our country a good result — because it really needs it right now,” said Danylo Gladkyi, a member of Ukraine’s team. He and his teammates are too young to be eligible for Ukraine’s national call-up of all men over 18 to take part in the war effort.

    Gladkyi said an international package delivery company wasn’t delivering into Ukraine, and reliance on a smaller private company to ship the kit from Poland into Ukraine got tangled up with customs officials. That logjam got cleared last Sunday, forcing the team to dash to get their robot ready with adaptations they had planned — only days before the contest began.

    The event, launched in 2017 with backing from American innovator Dean Kamen, encourages young people from all corners of the globe to put their technical smarts and mechanical knowhow to challenges that represent symbolic solutions to global problems.

    This year’s theme is carbon capture, a nascent technology in which excess heat-trapping CO2 in the atmosphere is sucked out of the skies and sequestered, often underground, to help fight global warming.

    Teams use game controllers like those attached to consoles in millions of households worldwide to direct their self-designed robots to zip around pits, or “fields,” to scoop up hollow plastic balls with holes in them that symbolically represent carbon. Each round starts by emptying a clear rectangular box filled with the balls into the field, prompting a whirring, hissing scramble to pick them up.

    The initial goal is to fill a tower topped by a funnel in the center of the field with as many balls as possible. Teams can do that in one of two ways: either by directing the robots to feed the balls into corner pockets, where team members can pluck them out and toss them by hand into the funnel, or by having the robots catapult the balls up into the funnel themselves.

    Every team has an interest in filling the funnel: the more collected, the more everyone benefits.

    But in the final 30 seconds of each session, after the frenetic quest to collect the balls, a second, cutthroat challenge awaits: Along the stem of each tower are short branches, or bars, at varying levels that the teams — using the mechanism of their choice, such as hooks, winches or extendable arms — try to direct their robots to ascend.

    The higher the level reached, the greater the “multiplier” of the total point value of the balls they will receive. Success is getting as high as possible, and with six teams on the field, it’s a dash for the highest perch.
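
    To make the scoring arithmetic concrete, here is a tiny illustrative sketch in Python. The point value per ball and the multiplier per bar are invented for illustration; First Global’s actual values are not given in this article.

        # Hypothetical scoring for the climb multiplier described above.
        # BALL_POINTS and CLIMB_MULTIPLIER are made-up values, not the real rules.
        BALL_POINTS = 1
        CLIMB_MULTIPLIER = {0: 1, 1: 2, 2: 3, 3: 4}  # 0 = no climb; higher bar, bigger multiplier


        def team_score(balls_in_funnel: int, highest_bar_reached: int) -> int:
            """Ball points earned during the round, scaled by the climb multiplier."""
            return balls_in_funnel * BALL_POINTS * CLIMB_MULTIPLIER[highest_bar_reached]


        print(team_score(40, 0))  # 40 balls, no climb          -> 40 points
        print(team_score(40, 2))  # same 40 balls, second bar   -> 120 points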

    By meshing competition with common interest, the “First Global” initiative aims to offer a tonic to a troubled world, where kids look past politics to help solve problems that face everybody.

    The opening-day ceremony had an Olympic vibe, with teams parading in behind their national flags, and short bars of national anthems playing, but the young people made it clear this was about a new kind of global high school sport, in an industrial domain that promises to leave a large footprint in the 21st century.

    The competition takes many minds off troubles in the world, from Russia’s invasion of Ukraine to the fallout from Syria’s lingering war, to famine in the Horn of Africa, and recent upheaval in Iran.

    While most of the world’s countries were taking part, some were not: Russia, in particular, has been left out.

    Past winners of such robotics competitions include “Team Hope” — refugees and stateless others — and a team of Afghan girls.

    Source link

  • CROP ROBOTICS 2022, BEYOND THE VALLEY OF DEATH

    CROP ROBOTICS 2022, BEYOND THE VALLEY OF DEATH

    Are we finally starting to see the adoption of labor-saving robots in agriculture? The short and unfulfilling summary answer is “It depends”. Undeniably, we are seeing clear signs of progress yet, simultaneously, we see clear signs of more progress needed. (Hi-res copy of the landscape.)

    Earlier this year, Western Growers Association produced an excellent report that outlined the need for robotics in agriculture. Ongoing labor challenges are, of course, a major driver, but so are rising costs, future demand, climate change impacts, and sustainability, among others. The use of robotics in agricultural production is the next progression of decades of increasing mechanization and automation to enhance crop production. Today’s crop robotics can build upon these preceding solutions and leverage newer technologies like precise navigation, vision and other sensor systems, connectivity and interoperability protocols, deep learning and artificial intelligence to address farmers’ current and future challenges.

    So What is a Crop Robot?

    We at The Mixing Bowl and Better Food Ventures create various market landscape maps that capture the use of technology in our food system. Our intent in producing these landscapes is to not only represent where a technology’s adoption is today, but, more importantly, where it is heading. So, as we developed this 2022 Crop Robotics Landscape, our frame of reference was to look beyond mechanization and defined automation to more autonomous crop robotics. This focus on “robotics” perhaps created the hardest challenge for us—defining a “Crop Robot”.

    According to the definition of the Oxford English Dictionary, “A robot is a machine—especially one programmable by a computer—capable of carrying out a complex series of actions automatically.” Putting agriculture aside for a moment, that definition means that a dishwasher, washing machine, or a thermostat controlling an air conditioner could all be considered robots, not things that evoke “robot” to most people. When asking “What is a Crop Robot?” in our interviews for this analysis, the theme of “labor savings” came through strongly. Must a crop robot be a labor-reducing tool? This is where our definition of a crop robot started us down the “It depends” path.

    • If a machine is only sensing or gathering data, is it saving enough labor to be considered a robot?
    • If a machine does not have a fully autonomous mobility system to move around—perhaps just an implement pulled by a standard tractor—is it a robot?
    • If a machine is solely an autonomous mobility system not designed for any specific labor-saving agriculture task, is it a robot?
    • If the machine is an unmanned aerial vehicle (UAV)/aerial drone, is it a robot? Does the answer change if a fleet of drones coordinates the spraying of a field amongst themselves?

    Eventually, for the purposes of this robotic landscape analysis, we focused on machines that use hardware and software to perceive surroundings, analyze data and take real-time action on information related to an agricultural crop-related function without human intervention.

    This definition focuses on characteristics that enable autonomous, not deterministic, actions. In many instances, repetitive or constrained automation can get a task completed in an efficient and cost-effective manner. Much of the existing and indispensable agricultural machinery and automation used on farms today would fit that description. However, we wanted to look specifically at robotic technologies that can take more unplanned, appropriate and timely action in the dynamic, unpredictable, and unstructured environments that exist in agricultural production. That translates to more precision, more dexterity and more autonomy.
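
    Put in software terms, that working definition amounts to a closed perceive-analyze-act loop with no operator inside it. The minimal Python sketch below is only meant to make the contrast with deterministic automation concrete; the sensor and actuator functions are placeholders standing in for whatever hardware a particular robot carries.

        import time


        def perceive() -> dict:
            """Placeholder: fuse cameras, GPS and other sensors into one observation."""
            return {"weeds": [(3.2, 1.1)], "position": (0.0, 0.0)}  # made-up values


        def analyze(observation: dict) -> list:
            """Placeholder: decide what to do from what was just observed."""
            return [("spray", weed) for weed in observation["weeds"]]


        def act(actions: list) -> None:
            """Placeholder: drive actuators; a deterministic machine would skip straight here."""
            for action in actions:
                print("executing", action)


        def control_loop(cycles: int = 10, hz: float = 10.0) -> None:
            """Each cycle, the action is chosen from the current perception, not a fixed script."""
            for _ in range(cycles):
                act(analyze(perceive()))
                time.sleep(1.0 / hz)


        control_loop()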

    The Crop Robotics Landscape

    Our 2022 Crop Robotics Landscape includes nearly 250 companies developing crop robotic systems today. The robots are a mix: some that are self-propelled and some that aren’t, some that can navigate autonomously and those that can’t, some that are precise and some that are not, both ground-based and air-based systems, and those focused on indoor or outdoor production. In general, the systems need to offer autonomous navigation or vision-aided precision or a combination to be included on the landscape. These included areas are highlighted in gold in the chart below. The white areas are not autonomous or not complete robotic systems and are not included on the landscape.

    The landscape is limited to robotic solutions utilized in the production of food crops; it does not include robotics for animal farming nor for the production of cannabis. Pre-production nursery and post-harvest segments are also excluded (but note that highly automated solutions for these tasks are commercially available today). Likewise, sensor-only and analytic offerings are also not included, unless they are part of a complete robotic system.

    Additionally, we only included companies that are providing their robotic systems commercially to others. If they develop robotics only for their own internal use or only offer services then they are not included, nor are academic or consortium research projects unless they appear to be heading to a commercial offering. Product companies should have reached at least the demonstrable-prototype stage in their development. Finally, companies appear only once on the landscape, even though some may offer multiple or multi-use robotic solutions. They are also placed according to their most sophisticated or primary function.

    The landscape is segmented vertically by crop production system: broadacre row crops, field-grown specialty, orchard and vineyard, and indoor. The landscape is also segmented horizontally by functional area: autonomous movement, crop management, and harvest. Within those functional areas are the more specific task/product segments described here:

    Autonomous Movement

    Navigation/Autonomy – more sophisticated autosteer systems with headland turning capability and autonomous navigation systems

    Small Tractor/Platform – smaller, people-sized autonomous tractors and carriers

    Large Tractor – larger autonomous tractors and carriers

    Indoor Platform – smaller autonomous carriers specifically for indoor farms

    Crop Management

    Scouting and Indoor Scouting – autonomous mapping and scouting robots and aerial drones; note that robots appearing in other task/product categories may have scouting capabilities in addition to their primary function

    Preparation & Planting – autonomous field preparation and planting robots

    Drone Application – spraying and spreading aerial drones

    Indoor Drone Protection – indoor crop protection aerial drones

    Application and Indoor Application – autonomous and/or vision-guided application including vision-based precision control systems

    Weeding, Thinning & Pruning – autonomous and/or vision-guided weeding, thinning and pruning, including vision-based precision control systems

    Indoor Deleafing – autonomous indoor vine-crop deleafing robots

    Harvest

    Harvesting – crop sector-specific autonomous and/or precision harvest robotics

    Some of the task/product segments, like Large Tractor, span multiple crop systems, as the robotic solutions within them may be applicable to more than one crop type. Logo positions within these landscape boxes are not necessarily indicative of crop system applicability.

    The diversity of offerings appearing on the landscape is perhaps the biggest takeaway; crop robotics is a very active sector across tasks and crop types. In the Autonomous Movement area, although autosteer has been in wide use for many years, more robust autonomous navigation technology and fully autonomous tractors and smaller multi-use motive platforms are just entering the market. In Crop Management there is a mix of self-propelled and trailed and attached implements. Vision-aided precision crop care tasks like spot spraying and weeding are areas of heavy development activity, particularly for the less automated specialty crop sector. Finally, high-value, high-labor crops like strawberries, fresh-market tomatoes, and orchard fruit are the focus for many robotic harvesting initiatives. As noted, there is a lot of activity; however, successful commercialization is rarer.

    Traversing the Valley of Death to Achieve Scale

    The Government of the United Kingdom recently released a report that reviews Automation in Horticulture. In the report, they include the automation lifecycle analysis graphic shown below, which they refer to as “Technology Readiness Levels in Horticulture”. If we were to map the more than 600 companies we researched in our analysis, well over 90 percent of these companies would still be labeled in the “Research” or “System Development” phases. Historically, many agriculture robotics companies have perished in the “Valley of Death”. Only a handful of companies have reached “Commercialization”, the phase in which companies attempt to traverse the perilous journey from product success to business success and profitability.

    There are many reasons why ag robotics has had a high failure rate in reaching commercial scale. At its core, it has been very difficult to build a reliable machine that delivers value to a farmer on par with a non-robotic or manual solution, at a cost-effective price point.

    Amongst the technical challenges crop robotics companies face are:

    1. Design: In the early days a company may want to vary its product design to try new things. But at some point as it begins to scale, it needs to lock in standardization to the degree possible. Updating deployed systems remains a continuous challenge.
    2. Manufacturing: Maturing companies move from custom to standardized manufacturing. One company we spoke with had gone from building machines itself, to building just a base and having vendors do the sub-assembly. It has now matured to the point that not a single team member touches a wrench; all manufacturing is done by partners.
    3. Reliability: A commonly used metric is hours of uninterrupted operation, and scaling requires going from “faults per mile” to “miles per fault” (a toy illustration of this flip follows this list). The need to handle the adverse and unpredictable conditions of agricultural production makes building a reliable machine that much harder. As an example, one person told us about the unforeseen challenge of working in vineyards, where the acid from grape juice accelerates equipment deterioration.
    4. Operation: At some point in the scaling process, farm staff will operate the machine without the presence of robotic solution provider support staff. At this point, there are often knowledge gaps on how to effectively operate the machine that need to be resolved. A step in scaling is getting farm staff trained to operate the machines themselves.
    5. Service: Another metric we heard about was decreasing service support requirements: How can a robotics company go from having X people support a single unit to having a single person support Y different units?
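
    As promised in point 3, here is a toy illustration, in Python and with made-up numbers, of how the same operating record reads under the “faults per mile” and “miles per fault” framings; scaling is essentially about pushing the second number up.

        # Same fault log expressed both ways; all figures are invented for illustration.
        def faults_per_mile(miles: float, faults: int) -> float:
            return faults / miles


        def miles_per_fault(miles: float, faults: int) -> float:
            return miles / faults if faults else float("inf")


        # Early prototype: 5 faults over 20 miles of operation.
        print(faults_per_mile(20, 5), miles_per_fault(20, 5))    # 0.25 faults/mile, 4 miles/fault
        # Field-ready target: 2 faults over 500 miles of operation.
        print(faults_per_mile(500, 2), miles_per_fault(500, 2))  # 0.004 faults/mile, 250 miles/fault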

    A last technical facet of scaling is the ease with which a platform can be modified to serve multiple crops or multiple tasks. The space is still so early that we don’t have many data points about repurposing technology for multiple crops or tasks. However, it is something many companies are clearly looking to prove, whether to upsell customers or to convince investors they have the potential to serve a larger market.

    We heard from numerous crop robotic startups and investors that the technology challenges need to be tackled first, then the economic and business challenges can be addressed. The reality, of course, is that a successful crop robotic solution developer must face several challenges simultaneously: sustaining a business while refining product-market fit to get paying customers; refining product-market fit while sustaining the interest of investors; and sustaining the engagement of farmer customers.

    On the business side, we tried to identify when a company could claim it had made it through the “Valley of Death”. One group we spoke with very simply said there were three key business questions to ask:

    1. Can we sell it?
    2. Does demand outstrip supply?
    3. Do the unit economics work out for all parties?

    The answer to the question of “Can we sell it?” usually equated to when and if the robot could perform the task on par with a human—a comparable performance for a comparable cost. That performance clearly varies by crop and task. As an example, there was a generally shared sense that “picking” was the most difficult task to achieve on par with the time, accuracy and cost of a human.

    One thread that came up in our conversations is that many farmers may not yet see the longer-term potential of what robots can do in agriculture. They look at (and value) them merely as a way to replace the tasks a human does—but do not consider the more efficient approaches, beyond human capabilities, that these powerful platforms could enable.

    In our discussions we probed on whether the business model of a crop robotics company made a substantial difference in whether it could sell its product. Responses were wide-ranging as to whether there is a benefit to a “Robotics-as-a-Service” (RaaS) model versus a machine buy/lease model. Our net conclusion is that, while it may be advantageous to offer RaaS in the early stages of a company’s development, over the longer run companies should plan to operate under both a buy/lease and a RaaS model. The advantages of RaaS in the early days are that it 1) allows a farmer to “try before you buy,” which lowers the complexity and cost and thus the barrier to adoption, and 2) lets a startup work more closely with farmers to understand their problems and identify potential new challenges to solve.

    Many startups have “hyped” their solutions too early, before they could conquer the many complexities involved with successfully operating in the market. This “hype” has caused many farmers to be leery of crop robotics in general. Farmers just want (and need) things to work and many may have been burned in the past by adopting technologies that were not fully mature. As one startup said, “It is hard to get them to understand the iterative process”. Still, farmers are also known as problem solvers and many continue to engage with startups to help mature solutions.

    Of course, the “Can we sell it?” question should really be extended to “Can we sell and support it?”. An interesting point to watch between incumbents and new solution providers will be the scaling of startups and the resulting need for those companies to have a cost-effective sales and service channel. Incumbent vendors, of course, have those channels, and John Deere and GUSS Automation have announced just such a partnership.

    Like farmers, investors also walk hand-in-hand with a robotics startup crossing the Valley of Death. Investor sentiment toward agriculture robotics is mixed. On the one hand, there is an acknowledgement that there have not been notable exits of profitable startups in this space (as opposed to those just having desirable technology). On the other hand, there is a recognition that agriculture’s labor issues are becoming more acute and large potential markets could be realized this time around. Investors also see that the quality of the technology and startup teams have improved in the last few years.

    It is encouraging to see more investors looking at the space than a few years ago, writing bigger checks in later rounds, and investing at high valuations. Investors also understand the challenges better than before so that they can differentiate between segments developers are targeting, e.g., the difficulty of harvesting in an open field versus scouting in a greenhouse.

    What Gives Us Optimism that Crop Robotics is Making Progress?

    So, given the above, why do we feel optimistic that crop robotics is making healthy progress? For a number of reasons, the Valley of Death may not be as wide or as fatal for companies in this space as it has been in the past.

    Beyond the growing need for labor-saving solutions in agriculture, we are optimistic that crop robotics is making progress simply because of the underlying technology progress that has occurred in the last decade or so. Again and again in the interviews we conducted, we heard phrases similar to “this would not have been possible a decade ago”. Someone stated flat out that a few years ago “The machines weren’t ready” for the conditions of farming. Core compute technology, the accessibility and performance of computer vision systems, deep learning capabilities, and even automated mobility systems have all come a long way in the last ten years.

    In addition to the improved technology base, there is more seasoned talent than a decade ago and that talent brings a range of experiences from across the robotics landscape, including insight into scaling to success. In this regard, crop robotics can leverage the broader, better-funded robotics spaces of self-driving vehicles and warehouse automation. Equally important, most of the teams that are seeing success employ a combination of robotics experts and farm experts. Past ag robotics teams may have had the technological prowess to develop a solution but may not have understood the ag market or the realities of farming environments.

    We are also optimistic because the depth and breadth of crop robotic solutions is expanding, as illustrated by the number of companies represented on our landscape. Although large commodity row crop farms—like those of the Midwestern US—are already highly automated and have even adopted robotic autosteer systems en masse, a very clear indication of progress is that we are seeing a more diverse set of crop robotic solutions than in years past.

    For example, new robotic platforms are successfully undertaking labor-saving tasks that are of modest difficulty. Perhaps the best example of this is the GUSS autonomous sprayer that can work in orchards. The self-powered GUSS machine navigates autonomously and can adjust its spraying selectively based on its ultrasonic sensors. It has reached commercial scale. We are also starting to see more solutions targeting farmers who have been underserved by labor-saving automation solutions, such as smaller farm operations or niche specialty crop systems. Examples of this are Burro, Naio or farm-ng. Lastly, we are seeing the development of “smart implements”. By not taking on the burden of developing autonomous movement, these solutions can be pulled behind a tractor to focus on complex agriculture tasks like vision-guided selective weeding and spraying. Verdant, Farmwise and Carbon Robotics are examples of this kind of solution.

    One encouraging trend we are also watching is the role of incumbent agriculture equipment providers, particularly in specialty crops. John Deere (Blue River, Bear Flag Robotics) as well as Case New Holland (Raven Industries) have signaled a willingness to acquire companies in crop robotics to complement their ongoing internal R&D efforts. Yamaha and Toyota, through their venture funds, have also shown a desire to partner and invest in the space. It remains to be seen whether other incumbent equipment players are willing to invest in the assemblage of technology and talent required to bring robotic solutions to the marketplace.

    Looking Ahead

    The drivers for increased automation in agriculture are readily apparent and are likely to continue to increase over time. Thus, a large opportunity exists for robotic solutions that can help farmers mitigate their production challenges. That is, as long as those solutions perform well and at reasonable cost in the real world of commercial farm operations. As we observed while researching the landscape, there is an impressive number of companies focused on developing crop robotics solutions across a breadth of crop systems and tasks, and with more commercial focus than past projects. However, the market continues to feel early as companies continue to navigate the difficult process of creating and deploying robust solutions at scale for this challenging industry. Still, there is more room for optimism and more tangible progress being made now than ever before. The Crop Robotics “Valley of Death” that so many startups have failed to cross appears to be becoming less wide and ominous in great part due to the break-neck speed of technological progress. While a robotic revolution in crop production is likely still some time off, we are seeing a promising evolution and expect to see more successful crop robotic companies in the not too distant future.

    Acknowledgements

    We would like to thank the University of California Agriculture and Natural Resources and The Vine for their strong interest in crop robotics and their continued support of this project. Thank you to Simon Pearson, Director, Lincoln Institute for Agri-Food Technology and Professor of Agri-Food Technology, University of Lincoln in the UK for his insights and the use of the graphic from the Automation in Horticulture Review report. Thank you to Walt Duflock of Western Growers Association for sharing his detailed perspective on the ag robotics sector. Most importantly we would like to acknowledge all the start-ups and innovators who are working tirelessly to make crop robotics a much needed reality. A special thanks to those entrepreneurs and investors that spoke with us and provided a unique view into the challenges and excitement of a crop robotic business.

    Bios

    Chris Taylor is a Senior Consultant on The Mixing Bowl team and has spent more than 20 years on global IT strategy and development innovation in manufacturing, design and healthcare, focusing most recently on AgTech.

    Michael Rose is a Partner at The Mixing Bowl and Better Food Ventures where he brings more than 25 years immersed in new venture creation and innovation as an operating executive and investor across the Food Tech, AgTech, restaurant, Internet, and mobile sectors.

    Rob Trice founded The Mixing Bowl to connect food, agriculture and IT innovators for thought and action leadership and Better Food Ventures to invest in startups harnessing IT for positive impact in Agrifoodtech.

    Michael Rose, Contributor

    Source link

  • ‘Smart plastic’ material is step forward toward soft, flexible robotics and electronics

    ‘Smart plastic’ material is step forward toward soft, flexible robotics and electronics

    Newswise — Inspired by living things from trees to shellfish, researchers at The University of Texas at Austin set out to create a plastic that, much like many life forms, is hard and rigid in some places and soft and stretchy in others. Their success — a first, using only light and a catalyst to change properties such as hardness and elasticity in molecules of the same type — has brought about a new material that is 10 times as tough as natural rubber and could lead to more flexible electronics and robotics.

    The findings are published today in the journal Science.

    “This is the first material of its type,” said Zachariah Page, assistant professor of chemistry and corresponding author on the paper. “The ability to control crystallization, and therefore the physical properties of the material, with the application of light is potentially transformative for wearable electronics or actuators in soft robotics.”

    Scientists have long sought to mimic the properties of living structures, like skin and muscle, with synthetic materials. In living organisms, structures often combine attributes such as strength and flexibility with ease. When using a mix of different synthetic materials to mimic these attributes, materials often fail, coming apart and ripping at the junctures between different materials.

    “Oftentimes, when bringing materials together, particularly if they have very different mechanical properties, they want to come apart,” Page said. Page and his team were able to control and change the structure of a plastic-like material, using light to alter how firm or stretchy the material would be.

    Chemists started with a monomer, a small molecule that binds with others like it to form the building blocks of larger structures called polymers; the polymer here was similar to the one found in the most commonly used plastic. After testing a dozen catalysts, they found one that, when added to their monomer and exposed to visible light, resulted in a semicrystalline polymer similar to those found in existing synthetic rubber. A harder and more rigid material was formed in the areas the light touched, while the unlit areas retained their soft, stretchy properties.

    Because the substance is made of one material with different properties, it was stronger and could be stretched farther than most mixed materials.

    The reaction takes place at room temperature, the monomer and catalyst are commercially available, and researchers used inexpensive blue LEDs as the light source in the experiment. The reaction also takes less than an hour and minimizes use of any hazardous waste, which makes the process rapid, inexpensive, energy efficient and environmentally benign.

    The researchers will next seek to develop more objects with the material to continue to test its usability.

    “We are looking forward to exploring methods of applying this chemistry towards making 3D objects containing both hard and soft components,” said first author Adrian Rylski, a doctoral student at UT Austin.

    The team envisions the material could be used as a flexible foundation to anchor electronic components in medical devices or wearable tech. In robotics, strong and flexible materials are desirable to improve movement and durability.

    Henry L. Cater, Keldy S. Mason, Marshall J. Allen, Anthony J. Arrowood, Benny D. Freeman and Gabriel E. Sanoja of The University of Texas at Austin also contributed to the research.

    The research was funded by the National Science Foundation, the U.S. Department of Energy and the Robert A. Welch Foundation.

    University of Texas at Austin (UT Austin)

    Source link