Robotic hand rotates objects using touch, not vision

A team of engineers from the University of California San Diego has developed an innovative approach to enable a robotic hand to rotate objects using only touch, without relying on vision. Inspired by how humans effortlessly handle objects without seeing them, the researchers built a robotic hand with 16 touch sensors attached to its palm and fingers. These low-cost touch sensors provide simple binary signals – touch or no touch – allowing the robotic hand to smoothly rotate various objects, including small toys, cans, fruits, and vegetables, without causing damage.

The new technique contrasts with traditional methods that use a few high-cost, high-resolution touch sensors on a small area of the robotic hand, primarily at the fingertips. Such approaches have limitations, including reduced sensing ability due to the small number of sensors, difficulties in simulating and using high-resolution touch sensors in real-world experiments, and reliance on vision.

In contrast, the team’s approach demonstrates that detailed texture information about an object is not necessary for the task. Instead, simple binary signals indicating contact with the sensors are sufficient for the robotic hand to perform in-hand rotation effectively. The large coverage of binary touch sensors provides the necessary 3D structure and orientation information about the object, enabling successful rotation without the need for vision.

To train the system, the researchers conducted simulations of a virtual robotic hand rotating diverse objects with irregular shapes. The system assessed which sensors on the hand were being touched by the object during rotation, as well as the current positions and actions of the hand’s joints. Based on this information, the system instructed the robotic hand on the appropriate joint movements.
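The observation-to-action mapping described above can be sketched as a small policy network: the inputs are the 16 binary touch readings plus the hand's joint positions and previous actions, and the output is a set of joint targets. This is a minimal illustrative sketch, not the authors' implementation; the network sizes, the use of a single feedforward layer, and the joint count are all assumptions for illustration (the real system was trained with reinforcement learning in simulation).

```python
import numpy as np

rng = np.random.default_rng(0)

N_SENSORS = 16   # binary touch sensors on the palm and fingers (from the article)
N_JOINTS = 16    # illustrative joint count, an assumption
HIDDEN = 64      # illustrative hidden width, an assumption

# Randomly initialized weights stand in for a policy trained in simulation.
W1 = rng.normal(0.0, 0.1, (HIDDEN, N_SENSORS + 2 * N_JOINTS))
b1 = np.zeros(HIDDEN)
W2 = rng.normal(0.0, 0.1, (N_JOINTS, HIDDEN))
b2 = np.zeros(N_JOINTS)

def policy_step(touch, joint_pos, prev_action):
    """Map binary touch signals plus joint state to joint targets."""
    obs = np.concatenate([touch, joint_pos, prev_action])
    h = np.tanh(W1 @ obs + b1)
    return np.tanh(W2 @ h + b2)  # joint position targets in [-1, 1]

# One control step: which sensors are currently in contact, plus joint state.
touch = rng.integers(0, 2, N_SENSORS).astype(float)  # touch / no touch
joint_pos = rng.uniform(-1.0, 1.0, N_JOINTS)
prev_action = np.zeros(N_JOINTS)
action = policy_step(touch, joint_pos, prev_action)
print(action.shape)  # (16,)
```

The key point the sketch captures is that the observation contains no camera input and no high-resolution tactile image, only binary contact bits and proprioception.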

Upon testing the real-life robotic hand with previously unseen objects, such as a tomato, pepper, can of peanut butter, and a toy rubber duck, the robotic hand successfully rotated the objects without stalling or losing its grip. While objects with complex shapes took longer to rotate, the robotic hand demonstrated its ability to rotate objects around different axes.

This groundbreaking approach opens up possibilities for robots to manipulate objects in the absence of visual cues, making it a valuable advancement in the field of robotics. The team presented their work at the 2023 Robotics: Science and Systems Conference.

Aihub Team
