This Robot Could Help Fulfill Your Online Shopping Sprees

Imagine for a moment that you have suction cups for fingertips—unless you’re currently on hallucinogens, in which case you should not imagine that. Each sucker has a different size and flexibility, making one fingertip ideal for sticking onto a flat surface like cardboard, another better suited to a round thing like a ball, and another better for something more irregular, like a flower pot. On its own, each digit may be limited in what it can handle. But together, they can work as a team to manipulate a range of objects.

This is the idea behind Ambi Robotics, a lab-grown startup that is today emerging from stealth mode with sorting robots and an operating system for running such manipulative machines. The company’s founders want to put robots to work in jobs that any rational machine should be terrified of: Picking up objects in warehouses. What comes so easily to people—grasping any object that isn’t too heavy—is actually a nightmare for robots. After decades of research in robotics labs across the world, the machines still have nowhere near our dexterity. But maybe what they need is suction cups for fingertips. 

Ambi Robotics grew out of a University of California, Berkeley research project called Dex-Net that models how robots should grip ordinary objects. Think of it as the robotics version of how computer scientists build image-recognition AI. To train machines to recognize, say, a cat, researchers have to first build a database of lots and lots of images that contain felines. In each, they’d draw a box around the cat to teach the neural network: Look, this here is a cat. Once the network had parsed a massive number of examples, it could then “generalize,” automatically recognizing a cat in a new image it had never seen before. 

Dex-Net works in the same way, but for robotic graspers. Working in a simulated space, scientists create 3D models of all kinds of objects, then calculate where a robot should touch each one to get a “robust” grip. For instance, on a ball you’d want the robot to grab around the equator, not try to pinch one of the poles. That sounds obvious, but robots need to learn these things from scratch. “In our case, the examples are not images, but actually 3D objects with robust grasp points on them,” says UC Berkeley roboticist Ken Goldberg, who developed Dex-Net and cofounded Ambi Robotics. “Then, when we fed that into the network, it had a similar effect, that it started generalizing to new objects.” Even if the robot had never seen a particular object before, it could call upon its training with a galaxy of other objects to calculate how best to grasp it. 
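To make the parallel with image recognition concrete, here is a minimal, hypothetical sketch of that kind of training loop in PyTorch. It is not Dex-Net’s actual code; the depth-patch inputs, grasp-depth feature, labels, and network size are all stand-ins. But the pattern it shows is the one described above: feed the network many simulated grasp examples labeled robust or not, and let it learn a scoring function that can transfer to objects it has never seen.

```python
# Hypothetical sketch: train a network to score candidate grasps, in the
# spirit of Dex-Net. The data here is synthetic; a real system would use
# depth images rendered from 3D object models and grasp poses evaluated
# for robustness in simulation.
import torch
import torch.nn as nn

# Each example: a small depth patch around a candidate grasp point (32x32)
# plus the gripper's approach depth, labeled 1 if the simulated grasp was
# robust (held under perturbation) and 0 otherwise.
num_examples = 4096
depth_patches = torch.rand(num_examples, 1, 32, 32)   # stand-in for rendered depth patches
grasp_depths = torch.rand(num_examples, 1)             # stand-in for gripper depth feature
labels = (depth_patches.mean(dim=(1, 2, 3)) > 0.5).float().unsqueeze(1)  # fake labels for the sketch

class GraspQualityNet(nn.Module):
    """Tiny convolutional net mapping (depth patch, grasp depth) -> logit of grasp robustness."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, 5), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 5), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),
        )
        self.head = nn.Sequential(
            nn.Linear(32 * 5 * 5 + 1, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, patch, depth):
        features = self.conv(patch)
        return self.head(torch.cat([features, depth], dim=1))

model = GraspQualityNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

# Full-batch training, just to show the shape of the loop.
for epoch in range(3):
    logits = model(depth_patches, grasp_depths)
    loss = loss_fn(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```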

Consider the grotesque ceramic coffee mug you made in art class in elementary school. You may have chosen to shape it in an absurd way, but you more than likely remembered to give it a handle. When you handed it to your parents and they pretended to like it, they grasped it by the handle—they’d already seen their fair share of professionally manufactured coffee mugs, and so they already knew how to grip it. Ambi Robotics’ robot operating system, AmbiOS, is the equivalent of that prior experience, only for robots.

“As humans, we’re able to really infer how to deal with that object, even though it’s unlike any mug that’s ever been made before,” says Stephen McKinley, cofounder of Ambi Robotics. “The system can reason about what the rest of that object looks like, to know that if you picked up on that part, you could reasonably assume that it’s a decent grasp.” 
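That generalization step is, in effect, inference with the trained scoring network: sample candidate grasp points on the never-before-seen object, score each one, and execute the winner. A hypothetical continuation of the sketch above, with the sampling and thresholding details invented purely for illustration:

```python
# Hypothetical inference step: score candidate grasps on an unseen object
# and pick the most promising one. Continues the GraspQualityNet sketch above.
num_candidates = 128
candidate_patches = torch.rand(num_candidates, 1, 32, 32)  # depth patches around sampled grasp points
candidate_depths = torch.rand(num_candidates, 1)

model.eval()
with torch.no_grad():
    scores = torch.sigmoid(model(candidate_patches, candidate_depths)).squeeze(1)

best = int(scores.argmax())
print(f"best candidate {best} with estimated grasp quality {scores[best]:.2f}")
# A real pipeline would convert the winning candidate back into a gripper pose
# and only execute it if the score clears a confidence threshold.
```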
