Apple's late to the car game and that's okay in AI News

Apple's late to the car game and that's okay
2 October 2015, 12:00 am

                    There have been Apple Car (or iCar) rumors since at least 2007. They usually involve the company teaming up with an automaker to design an iPod- or iPhone-ready vehicle.


Source: AI in the News

To visit any links mentioned, please view the original article; the link is at the top of this post.

1 Comment | Started October 03, 2015, 05:01:39 AM


Friday Funny in General Chat

Share your jokes here to bring joy to the world  :)

831 Comments | Started February 13, 2009, 01:52:35 PM


A philosophical issue in General AI Discussion

I am not sure if it is as much of an issue as it is a question, but the topic sounded more attractive this way.

Anyhow, for a long time now I have been undecided about which path I should choose. In other words, I have been thinking about which type of intelligence is more practical, plausible and expandable. There are basically two categories in my mind when I think about this. The first is an intelligent assistant, much like the fictional JARVIS; the issue there is that a classic approach to such a programming task would probably produce a limited program that could never grow into a real A.I. by my standard and definition.

The other category is somewhat less plausible and would require more thinking and philosophical work rather than practical experimentation. This type of intelligence best compares to the fictional android Data from Star Trek or his Marvel counterpart The Vision. It is a truly unlimited, human-like intelligence and therefore highly uncontrollable, as it can grow beyond our own frame of mind (something I'd be happy about).

I have been developing and sketching models for both types and have worked up many theories and concepts as to how they would function. Working on the assistant type has been mostly dissatisfying, as I can see more limits than possibilities in all related models. I have already made prototype programs, and they were successful; however, the main reason I stopped was that I realized how limited they are.
As far as the android model goes, I have made several sketches and realized that a new hardware architecture would be required to bring such an intelligence to life. How do I go about designing such an intelligence? Well, I examine my own behavior, and sometimes I observe animals and all organic life as a whole to find common behaviors. I'm basically back-tracking intelligence to its most basic form and the root of its evolution, for that is where the key and the seed lie. Also note that this type of intelligence can also be an assistant and take any form - much like "intelligence" can be found within a human, a bird, a fish or any other animal harboring a brain.

So to sum it up - I'm not sure what I'm asking of you guys. I am interested in your thoughts and would be happy to get a discussion going (after all I haven't chatted in a while with you fellas) and maybe get motivation to pursue a certain path.

16 Comments | Started September 29, 2015, 09:31:00 PM


192: Micro and nano robotics, with Brad Nelson in Robotics News

192: Micro and nano robotics, with Brad Nelson
2 October 2015, 9:16 pm

Link to audio file (30:58). Transcript below.

In this episode, Audrow Nash speaks with Brad Nelson, Professor at ETH Zurich, about his research regarding micro and nano robotics. They discuss many of Nelson’s projects: retinal and heart surgery, crystal harvesting, and robots with simulated flagella for mobility.

The video below shows some of the research discussed during the interview.


Brad Nelson 

Brad Nelson has been the Professor of Robotics and Intelligent Systems at ETH Zürich since 2002, where his research focuses on microrobotics and nanorobotics. Fundamentally, he is interested in how to make tiny intelligent machines that are millimeters to nanometers in size.

He studied mechanical engineering at the University of Illinois and the University of Minnesota, worked as a computer vision researcher at Honeywell and a software engineer at Motorola, served as a United States Peace Corps Volunteer in Botswana, Africa, and then obtained a Ph.D. in Robotics from Carnegie Mellon University. He was an Assistant Professor at the University of Illinois at Chicago and an Associate Professor at the University of Minnesota before moving to ETH.



Audrow Nash: Hi, can you introduce yourself?

Brad Nelson: My name is Brad Nelson. I am a professor of robotics and intelligent systems at ETH Zurich. I’ve been there for almost 13 years now.

Audrow Nash: Can you give me the big picture goal of your research?

Brad Nelson: The focus of our research is micro and nano robotics. We’re trying to understand how to make machines as small as we can and, along the way, we develop fabrication techniques to manufacture them. We look at ways to make them move, we look at applications for them and try to put that all together. We try to move the technology from fundamental research into a spinoff, a clinical application or something we might license to another company.

Audrow Nash: Just how small are these machines?

Brad Nelson: We work on things that are millimeters down to nanometers in scale. One of the things we’ve worked on for several years now is a device we call an artificial bacterial flagellum. It’s about the size of a bacterium, maybe 15 or 20 microns long. You could fit about three billion in a teaspoon. They’re very tiny: put five of them head to head and they might be about the width of a hair.

These are the smallest things we can see with optical microscopes. But then we also do a lot of work on nano fabrication with nano wires and nano structures, where you can’t see them, even with optical microscopes, so we need electron microscopes. Some of these structures will go down to 30 nanometers in diameter. And then we have things we put in the eye that are perhaps a millimeter or two long.

Audrow Nash: What are some applications of those?

Brad Nelson: One thing we’ve been working on has been retinal surgery; ways we can deliver drugs to specific locations on the retina, to target diseases such as age related macular degeneration or diabetic retinopathy. We’ve also realized that some of the technologies are not just for moving micro robots, but can move small catheters and endoscopes that you might want to put into the body.

We also work on how we might guide, not just untethered autonomous machines but small endoscopes or catheters, into the heart or the brain.

Audrow Nash: Do you see applications outside of the medical field?

Brad Nelson: We do. Drug discovery is one. One of the methods for discovering drugs is to first develop models of proteins. In the human DNA, we know the atomic sequence of protein molecules but we still have to understand the structure.

In structural biology, one of the main methods used to determine the shape of these tiny little molecules is to isolate proteins and crystallize them – grow them into tiny crystals. We’ve developed some of our micro robotic technology to handle these tiny crystals, automating the process of retrieving them from the “mother liquor” and placing them on a loop. That device is then immediately freeze-dried, with the protein crystal on it, and put into a high-energy x-ray beam. The structural biologist can infer the shape from the x-ray diffraction pattern.

It’s a long process but nobody has been able to automate it, up to now, so we think we have a solution there.

Lately, we have also been thinking of ways we can use these devices to create a catalytic reaction to treat wastewater.

Audrow Nash: Catalytic reaction?

Brad Nelson: To photo catalyze pollutants, for instance, and take toxins that are in the environment and degrade them into harmless substances. The idea is to put micro robots into polluted water, to treat and clean it.

There are also sensing applications: we can put micro robots into an environment and look at what chemicals are there, what kinds of substances that we want to get rid of. So we can use them as sensors to monitor the environment.

Audrow Nash: To understand this a little bit better I would like to talk about a few of your projects. To begin with, controlling micro robots with five degrees of freedom, can you describe that?

Brad Nelson: One of the first challenges we have is how to make small things. There’s microfabrication technology, which we borrow from the semiconductor industry; making small machines uses the same technology as making computer chips.

The second problem is how to make these small machines move: what kind of technologies can you use? Is there some kind of chemical reaction? Can you use lasers to deliver energy? The community has recently settled on the use of magnetic fields, so you generate fields externally with electromagnets.

Basically, you wrap a wire around some kind of an iron core and, as you run electric current through it, you can create weaker and stronger magnetic fields. By changing their strength, we can exert forces on these small devices if they’re made of magnetic material.

We first started doing this over a decade ago with just two degrees of freedom, making things move on a surface. Then we started working on three degrees, to make things move in the X, Y and Z coordinates and to rotate.

Then we turned to five degrees of freedom motion. We realized we were going to need several electromagnets for this. We did a lot of numerical simulations on a computer to understand where the best place to put these magnets was, and the best way to control them. We realized we needed at least eight electromagnets. We built a system we call the OctoMag, so that we can very precisely control the current in each magnet and move small things with it.
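The idea that coil current sets field strength can be illustrated with the textbook ideal-solenoid formula B = μ0·n·I. This is a minimal sketch with made-up coil parameters; the OctoMag's eight soft-magnetic-core electromagnets are considerably more complex than an ideal solenoid:

```python
import math

# Ideal-solenoid field: B = mu0 * n * I, with n turns per metre and
# current I in amperes. The coil parameters below are illustrative
# only; they are not OctoMag specifications.
MU0 = 4 * math.pi * 1e-7  # vacuum permeability, T*m/A

def solenoid_field(turns_per_metre: float, current_a: float) -> float:
    """Field magnitude (tesla) inside an ideal solenoid."""
    return MU0 * turns_per_metre * current_a

# Field strength is linear in the current: doubling the current
# doubles B. This linearity is what lets a controller steer a device
# by adjusting the currents in several coils independently.
b_1a = solenoid_field(10_000, 1.0)
b_2a = solenoid_field(10_000, 2.0)
print(b_1a, b_2a)
```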

Audrow Nash: Why do you need eight electromagnets for five degrees of freedom?

Brad Nelson: That’s something that we never quite understood until just recently. One of my post docs is working with me on this. Looking at the math, he realized that we can actually prove mathematically that eight is the minimum. We knew people who built systems that only had six magnets and they were having issues.

Magnetic fields can generate a force or a torque on something. The torque is like a compass needle on a magnetic field; we always know it’s going to point to the north and align itself with the earth’s magnetic field. Just by taking our field and rotating it, we can get other things – the magnetic particles, structures and micro robots – to rotate too, to align with the field.

Forces are a little more complicated because they require a spatial gradient of the field. The field has to be getting stronger in a direction, so that the micro robot moves in that direction. It’s with this coupling that the math gets a little complicated; you see a dependency between how you generate the gradient and the magnetic field.

It means that, although with five degrees of freedom you might think you just need five electromagnets, you can’t generate arbitrary gradients and fields at the same time. You need to add more electromagnets, and the question is: what’s the minimum? For our problem, it happened to be eight. The theoretical results are about to appear online now. It’s fascinating. People have been trying to understand how fields are generated since the 1800s, the 1700s even, and we’re still discovering new things. I think that’s going to help spur us on to new ideas and ways of making small things move, which is really the ultimate goal of what we’re trying to do.
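The torque and force relations described here are commonly written as τ = m × B and F = (m·∇)B: a uniform field only aligns the dipole, while a gradient is needed to pull it. A minimal numerical sketch, with an invented dipole moment and gradient (not OctoMag values):

```python
import numpy as np

def dipole_torque(m: np.ndarray, B: np.ndarray) -> np.ndarray:
    """Torque on a magnetic dipole m in field B: tau = m x B."""
    return np.cross(m, B)

def dipole_force(m: np.ndarray, grad_B: np.ndarray) -> np.ndarray:
    """Force on a dipole: F_j = sum_i m_i * dB_j/dx_i (needs a gradient)."""
    return m @ grad_B

m = np.array([1e-9, 0.0, 0.0])   # dipole moment along x, A*m^2 (illustrative)
B = np.array([0.0, 0.02, 0.0])   # uniform 20 mT field along y

# A uniform field produces a pure torque (about z here) and no force...
tau = dipole_torque(m, B)
F_uniform = dipole_force(m, np.zeros((3, 3)))

# ...while a field that strengthens along x pulls the dipole along x.
grad_B = np.zeros((3, 3))
grad_B[0, 0] = 1.0               # dB_x/dx = 1 T/m (illustrative)
F_gradient = dipole_force(m, grad_B)
print(tau, F_uniform, F_gradient)
```

The coupling mentioned in the interview shows up here: the same coil currents determine both B and grad_B, which is why more than five coils are needed for five independently controlled degrees of freedom.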

Audrow Nash: This technology has gone into your startups. In addition to using small robots for crystallography, can you talk a bit about having a viable business model in research for this? Did you begin the research with a plan to market it?

Brad Nelson: I became a professor in 1995, so 20 years ago. My goal then was just to do interesting research, write good grant proposals to get funding, and publish papers that people wanted to read. I think we’ve been successful in that.

Then, along the way, we started to get interested in getting our technologies out into the real world and that is a tremendous challenge. It’s one thing to have a research result to publish and another to actually develop a technology that people want to pay money for and use, and to help improve peoples’ lives in some way.

As I’ve developed my research agenda over the past 20 years, I started to think more carefully about potential business plans that might make sense. In biomedical, it’s clear you need to have patents and to be aware of what the patent landscape is. Even now, with some of our Masters projects, we will often go and just look at what patents are out there in this area.

If somebody like Siemens or Philips already has a patent in an area, it can block the technology. It doesn’t make sense to go down a route where somebody has the IP locked up and, even if you do develop the new technology, there’s no chance of getting it out. You start to think in terms of: can you own, or do you have the potential to own, some important intellectual property in the area and develop it? You also consider, as you’re working on things, which potential therapies might benefit and whether it makes sense, from a business standpoint, to develop them.

Audrow Nash: What do you mean by therapies?

Brad Nelson: Well, for instance, one of our startups is called Aeon Scientific. They took our technology and they’re starting to look at some uses for magnetic fields. They looked at a lot of different areas and one thing they realized is that, with the aging population, more and more people need a particular heart procedure to treat a condition called Atrial Fibrillation, where your heart doesn’t beat properly; the electric signals don’t propagate through the heart.

Audrow Nash: It’s too fast or too slow or irregular?

Brad Nelson: Too fast or too slow or it doesn’t synchronize properly and so electrophysiologists and cardiac surgeons have discovered that, if they do a little therapy within the heart, they can reset it. People see potential to help a lot of people, but that also means potential to make money. It takes a tremendous investment to get this technology into the clinic. To justify that investment, you’ve got to realize a return on it. For that therapy, it makes sense.

Another therapy we’ve thought about is bronchoscopy, putting some of these devices into the lungs, particularly in premature infants, and trying to see which portion of their lung might be collapsed. That’s a therapy we’re interested in. The business model isn’t so convincing there; people just don’t project the potential number of cases that would justify the investment. That’s one of the reasons you’ve got funding agencies to enable you to progress further.

Audrow Nash: You mean funding agencies that fund research that’s not as in demand?

Brad Nelson: They fund research where the business model may not be clear. Although there are benefits, business investors may not find it persuasive, but it’s certainly going to be able to treat patients. We try to balance the two. We’re always trying to weigh up the problems we’re working on, the ones that have the potential to make a big impact. There are a lot of ways to measure this. One of these is the business case.

Audrow Nash: Going back to steerable catheter design with Aeon Scientific. So, you insert a catheter and that goes in through the arm or the groin or the neck, correct?

Brad Nelson: It goes through the femoral artery, near the groin. It enters the right chamber and then you can move over into the left chamber. You can get into all four chambers with this.

Audrow Nash: It has a flexible end that you direct using similar principles to the five degrees of freedom of manipulating small devices?

Brad Nelson: Yes, so the end of the catheter – the last few centimeters – is very floppy. At the very end is a permanent magnet. We can control its position and its orientation in the heart very precisely. We use that to deliver radio frequency therapy – to ablate, creating a transmural ablation that stops electric signals from passing through certain areas of the heart.

Audrow Nash: The permanent magnet that’s at the end of the catheter is much bigger than some of the previous work you’ve done. Does the control of these magnetic robots scale?

Brad Nelson: It tends to scale well, although that needs a bit of explanation.

Generally, as devices get larger, magnetic fields can scale more favorably. But as they get smaller, it’s different. Magnetic forces and torques are a function of the volume of the magnet. Whenever you have something that depends on volume, as it gets smaller that’s not good, because if something is a cubic meter and then goes down to 10 centimeters, the size is decreased by just a factor of 10, but the volume has gone down by a factor of 1000. You’re getting these incredible decreases in the forces you can generate, so then we have to rethink our strategies. That’s where we’re inspired by nature – bacteria, for instance. We look at why E. coli swim with a rotary motor and use a flagellum.
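The cube-law argument is easy to check: if magnetic force scales with magnet volume, then shrinking the linear size by a factor of 10 cuts the available force by a factor of 1000. A one-line illustration:

```python
# Magnetic force scales with magnet volume, i.e. with the cube of the
# linear dimension, so modest reductions in size mean huge reductions
# in the force an external field can exert.
def volume_ratio(linear_scale: float) -> float:
    """Volume (and hence force) ratio for a given linear scale factor."""
    return linear_scale ** 3

print(volume_ratio(0.1))  # 10x smaller in length -> ~1000x less force
```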

Audrow Nash: Describe the flagella a bit.

Brad Nelson: When Howard Berg [a professor of physics and molecular and cellular biology at Harvard] was in Colorado in 1973, he discovered that some bacteria, like E. coli and salmonella, have this little flagellum. It’s a hair-like structure. People would look at it under a microscope and they couldn’t tell exactly how it moved. What Howard discovered, which surprised a lot of people, was that there was actually a rotary motor on the bacterium. People were astounded by that discovery. It was the first example of rotational motion found in nature.

Audrow Nash: So it moves forwards in a similar way as a screw into wood?

Brad Nelson: Yeah, it’s like a screw, so it rotates this hair, this flagella, into a kind of helix shape, like a coil, and it propels itself that way. There’s a lot of physics, a lot of biology about why this works so well at these scales, that’s fascinating. How did nature evolve this kind of a structure? That’s a wonderful question.

We looked at it and realized that we can’t build that motor. It’s about 45 nanometers in diameter and it’s got 30 or 40 parts that are made out of proteins. What we could make, using some recent developments in nanotechnology, is these tiny little screws. Then we realized that, if we put little magnets on the screws and rotated the field, we could get them to turn just like the E. coli flagella do. So they started propelling themselves, moving in fluid. That was in about 2007. We were surprised how well it worked and, as we came to understand the physics, the mathematics and the fabrication issues behind it, it’s been a very thrilling, rewarding area of research.
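An ideal corkscrew swimmer advances one helix pitch per revolution, reduced by slip, so its speed is roughly pitch × rotation frequency × (1 − slip). A hypothetical illustration; the pitch, frequency and slip values are invented, and real artificial bacterial flagella also stall above a "step-out" frequency not modeled here:

```python
def swimmer_speed_um_s(pitch_um: float, freq_hz: float, slip: float = 0.0) -> float:
    """Ideal helical-propulsion speed in micrometres per second.

    One pitch of forward travel per revolution, reduced by a slip
    fraction. Toy model only; not fitted to any real device.
    """
    return pitch_um * freq_hz * (1.0 - slip)

# A helix with a 2 um pitch rotated at 30 Hz, assuming 20% slip:
print(swimmer_speed_um_s(2.0, 30.0, 0.2))
```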

We’re always discovering something new, and we’ve recently published a couple of papers. In one of them we put DNA on these structures and demonstrated that, as we swim them near cells, we can get the cells to take up the DNA, perform gene transfection and express the genes.

The DNA would be coded for a particular fluorescent protein, so the cells would express it and we can watch the cells start to light up. It demonstrates that we can target individual cells and do gene transfection with them.

In another project we took 80,000 of these ABFs – Artificial Bacterial Flagella. We injected this huge swarm into the peritoneal cavity of a mouse and tracked it swimming within it. That paper’s recently come out.

Audrow Nash: What applications are there in that?

Brad Nelson: What we’re trying to do is demonstrate how we could potentially load some kind of drug on to these, and then target a particular location in the body. We demonstrated that we can load DNA and do transfection, then we demonstrated that we can move swarms of them and target particular locations, in this case in a mouse. Then, eventually, the furthest steps would be putting some kind of a drug that may treat a condition on there, and trying to target some very specific location, maybe a tumor in the body, and getting them to swarm around it and deliver the drug in a very localized area.

Audrow Nash: What do you think is the future and timeline of micro and nano robots?

Brad Nelson: I really think the field has made tremendous strides in the past 10 to 15 years. There are so many interesting problems and so many results that surprise me. I certainly think that within the next decade we’ll see these devices in clinical trials.

If we’re looking at medical applications, the first people we have to get interested in this are the doctors, the people who are really going to use it. They have to understand what the potential is and they have to understand what conditions could be treated with it. They have to get excited about it and be willing to work with us on it.

The first thing is you get doctors involved. Doctors have spent some of their valuable time with us on this. It takes tremendous investment and resources to get these things pushed through, to get all the engineering done properly and the FDA regulations, or whatever medical regulations need to be followed. That means you need business people; you need investment, so you need to convince business people too.

You’ve got to bring together not just the engineers and scientists, but also the medical doctors and the business side. That’s what always makes it difficult to predict exactly how long this will take. With the right resources we can move very quickly, but not everybody’s vision is in alignment initially. I certainly hope and expect that within less than a decade we’ll see applications in the clinics. I think, with this kind of thing, once it gets going, more and more things – things we can’t see right now – will start to emerge. I’m very confident about that.

Audrow Nash: Thank you.

Brad Nelson: Thank you very much.


All audio interviews are transcribed and edited for clarity with great care; however, we cannot assume responsibility for their accuracy.

If you like this post you may also enjoy:

Source: Robohub


Started October 03, 2015, 05:00:04 PM


Free for students, enthusiasts, hobbyists, and startups in Graphics and Video Software

I will never use it, but maybe someone here with a steady hand will find it helpful.


3D CAD reinvented
Fusion 360™ is the first 3D CAD, CAM, and CAE tool of its kind. It connects your entire product development process in a single cloud-based platform that works on both Mac and PC.

1 Comment | Started October 03, 2015, 07:56:36 AM


IFR predicts 15% industrial robot growth through 2018 in Robotics News

IFR predicts 15% industrial robot growth through 2018
2 October 2015, 8:42 pm

At a press event to launch the IFR’s World Robotics Industrial Robots statistical review of 2014 (with projections through 2018), the organization presented its $1,350 report, which forecasts a 15% CAGR – doubling the annual number of units sold to around 400,000 by 2018.

70% of those sales will be to users in China, Japan, the U.S., South Korea and Germany. China purchased 56% more robots in 2014 than in 2013, of which approximately 17,000 were made by Chinese vendors. The IFR forecasts Asian unit sales to increase from about 140,000 to 275,000 by 2018 – by far the largest and fastest-growing marketplace in the world. It projects 15% CAGR worldwide but, as the chart shows, growth in Asia – and China in particular – is expected to be considerably higher than that.

Specifically, sales for the global robotics industry in 2014 were $10.7 billion, a 13% increase over 2013. There are now about 1.5 million robots at work globally, an increase of 11% over 2013. Adding supporting services such as integration, accessories, peripherals, software and systems engineering at a 3X multiplier, worldwide 2014 sales are estimated to be $32 billion.
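The report's numbers hold together, as a couple of lines of arithmetic confirm. The 2014 unit base is inferred here from "doubling ... to around 400,000", since it is not stated directly in this article:

```python
# Sanity checks on the IFR figures quoted above.
cagr = 0.15
growth_2014_to_2018 = (1 + cagr) ** 4               # four years of 15% growth, ~1.75x
implied_2014_units = 400_000 / growth_2014_to_2018  # roughly 229,000 units

core_sales_2014_bn = 10.7                  # $ billions, robot sales only
with_services_bn = core_sales_2014_bn * 3  # 3x multiplier for integration,
                                           # accessories, software, etc.
print(round(implied_2014_units), with_services_bn)  # ~$32.1 bn, matching the article
```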

The report suggests that rapid automation in China and global competition of industrial production are the main drivers for the sustained growth forecasts. Further reasons for the increased demand for robots include:

  • the continuing strength of the auto industry
  • ever-higher production volumes in electronics manufacturing
  • energy efficiency and new materials driving the replacement of older robots
  • shorter lifetimes for manufacturing lines – as product varieties increase and life cycles decrease, automation equipment is retooled and upgraded more often.
Source: IFR

Source: Robohub


Started October 03, 2015, 11:00:03 AM


Key take-aways from the DroneApps Forum in Robotics News

Key take-aways from the DroneApps Forum
2 October 2015, 4:27 pm

The first DroneApps Forum in Lausanne, Switzerland, recently assembled 150 drone professionals from Europe, the US, Japan, and Australia for a two-day exchange of ideas. What set the event apart was its focus on reporting from commercial drone makers and commercial drone users — a more-than-welcome change from the orchestrated corporate news releases, crowdfunding (over-)promises, and CGI marketing stunts we’ve become accustomed to.

European, Japanese, and US drone manufacturers were in attendance, with many reporting on the number of flight hours logged by their customers and their market shares. They pointed out current challenges, including the urgent need for more situational awareness for inspection tasks with multicopters, and the need for higher endurance and higher payload for monitoring with small fixed wing planes. 

Operators, including Robin Murphy (Roboticists Without Borders, Center for Robot-Assisted Search and Rescue), Cyberhawk, COWI, and the SNCF (French national railway company), presented case studies that highlighted the benefits and downsides of using drones to replace existing solutions. 

Regulators and legal professionals discussed the current state of legislation in the UK, France, Switzerland, the US, and Canada. They openly shared their concerns related to privacy, criminal activity, and liability — but also pointed out the importance of staying level-headed in spite of anxieties and fears.

“Not every new technology has to result in a hectic activity of regulators.” – Peter Müller 

Discussions also covered software providers, including those providing aerial image processing such as Pix4d, Acute3D, and Agisoft. 

A number of the presentations are available here.

The picture emerging from this particular 2-day snapshot of the state of the industry showed what’s working today (e.g., fixed wing drones for surveying in the mining industry) and what isn’t (e.g., multicopters for autonomous inspection). Autonomous flight through airspace without obstacles seems to be well understood, and operators have logged thousands of hours. Similarly, autonomous takeoff and landing of small fixed wing planes seems to be working remarkably well. However, autonomous flight near obstacles, such as that required for most inspection tasks, remains a challenge without an adequate solution across all platforms.

One key take-away from the forum was the stark contrast between doers, who constantly fly, and talkers, who stage drone flights for marketing and PR to help sell their vision. (Don’t get me wrong — I think both are valid ways to bootstrap a new business, each with its own risks.)

Some other key take-aways from this particular event: 

  • Innovation is led by startups, some of which have already accumulated significant operational experience. Major established players like Lufthansa or Swiss WorldCargo are lagging far behind. 
  • Most hardware and electronics seem to be coming out of Europe. Most high-level software, apps, and cloud applications seem to be coming out of the US. 
  • Drones are used much more to improve current solutions than to pioneer new application areas. 
  • In spite of all the media hype, it’s still very early days for commercial drones. 

Source: Robohub


Started October 03, 2015, 05:01:39 AM


ROSCon Livestream Oct. 3-4 in Robotics News

ROSCon Livestream Oct. 3-4
2 October 2015, 3:22 pm


This year’s ROSCon is nearly upon us and we will be livestreaming presentations free of charge, courtesy of Qualcomm, beginning at 9:00 a.m. CEST (12:00 a.m. PDT / 3:00 a.m. EDT) on October 3, 2015. Check out the program here.

Click here for the live stream

All sessions will also be recorded and made available for viewing in the near future. Follow @OSRFoundation for announcements about their availability.

If you’re attending in-person, we look forward to seeing you soon!

#ROSCon15 Tweets

Source: Robohub


Started October 02, 2015, 11:00:31 PM


Soft robotic gripper can pick up and identify wide array of objects in Robotics News

Soft robotic gripper can pick up and identify wide array of objects
2 October 2015, 8:05 am

Team’s silicone rubber gripper can pick up an egg, a CD and paper, and identify objects by touch alone. By Adam Conner-Simons, MIT CSAIL

Robots have many strong suits, but delicacy traditionally hasn’t been one of them. Rigid limbs and digits make it difficult for them to grasp, hold, and manipulate a range of everyday objects without dropping or crushing them.

Recently, CSAIL researchers have discovered that the solution may be to turn to a substance more commonly associated with new buildings and Silly Putty: silicone.

At a conference this month, researchers from CSAIL Director Daniela Rus’ Distributed Robotics Lab demonstrated a 3-D-printed robotic hand made out of silicone rubber that can lift and handle objects as delicate as an egg and as thin as a compact disc.

Just as impressively, its three fingers have special sensors that can estimate the size and shape of an object accurately enough to identify it from a set of multiple items.

“Robots are often limited in what they can do because of how hard it is to interact with objects of different sizes and materials,” Rus says. “Grasping is an important step in being able to do useful tasks; with this work we set out to develop both the soft hands and the supporting control and planning systems that make dynamic grasping possible.”

The paper, which was co-written by Rus and graduate student Bianca Homberg, PhD candidate Robert Katzschmann, and postdoc Mehmet Dogar, will be presented at this month’s International Conference on Intelligent Robots and Systems.


The hard science of soft robots

The gripper, which can also pick up such items as a tennis ball, a Rubik’s cube and a Beanie Baby, is part of a larger body of work out of Rus’ lab at CSAIL aimed at showing the value of so-called “soft robots” made of unconventional materials such as silicone, paper, and fiber.

Researchers say that soft robots have a number of advantages over “hard” robots, including the ability to handle irregularly-shaped objects, squeeze into tight spaces, and readily recover from collisions.

“A robot with rigid hands will have much more trouble with tasks like picking up an object,” Homberg says. “This is because it has to have a good model of the object and spend a lot of time thinking about precisely how it will perform the grasp.”

Soft robots represent an intriguing new alternative. However, one downside to their extra flexibility (or “compliance”) is that they often have difficulty accurately measuring where an object is, or even if they have successfully picked it up at all.

That’s where the CSAIL team’s “bend sensors” come in. When the gripper homes in on an object, the fingers send back location data based on their curvature. Using this data, the robot can pick up an unknown object and compare it to the existing clusters of data points that represent past objects. With just three data points from a single grasp, the robot’s algorithms can distinguish between objects as similar in size as a cup and a lemonade bottle.
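
The paper's own identification method is not reproduced here, but the idea of matching a three-reading grasp signature against clusters of past grasps can be sketched as a nearest-centroid lookup (the object names and sensor values below are invented for illustration):

```python
import math

# Hypothetical cluster centres: mean bend-sensor readings (one value per
# finger) recorded from past grasps of known objects.
KNOWN_OBJECTS = {
    "cup":             (0.62, 0.60, 0.61),
    "lemonade bottle": (0.48, 0.47, 0.50),
    "tennis ball":     (0.55, 0.70, 0.54),
}

def identify(grasp):
    """Return the known object whose cluster centre lies nearest
    (in Euclidean distance) to this grasp's three bend readings."""
    return min(KNOWN_OBJECTS,
               key=lambda name: math.dist(grasp, KNOWN_OBJECTS[name]))

print(identify((0.60, 0.61, 0.63)))  # closest centre is "cup"
```

A real system would build these clusters from many grasps per object and report a confidence, but the core lookup is this simple.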

“As a human, if you’re blindfolded and you pick something up, you can feel it and still understand what it is,” says Katzschmann. “We want to develop a similar skill in robots — essentially, giving them ‘sight’ without them actually being able to see.”

The team is hopeful that, with further sensor advances, the system could eventually identify dozens of distinct objects, and be programmed to interact with them differently depending on their size, shape, and function.      


How it works

Researchers control the gripper via a series of pistons that push pressurized air through the silicone fingers. The pistons cause little bubbles to expand in the fingers, spurring them to stretch and bend.

The hand can grip using two types of grasps: “enveloping grasps,” where the object is entirely contained within the gripper, and “pinch grasps,” where the object is held by the tips of the fingers.

Outfitted for the popular Baxter manufacturing robot, the gripper significantly outperformed Baxter’s default gripper, which was unable to pick up a CD or piece of paper and was prone to completely crushing items like a soda can.

Like Rus’ previous robotic arm, the fingers are made of silicone rubber, chosen because it is stiff enough to hold its shape yet flexible enough to expand under the pressure from the pistons. Meanwhile, the gripper’s interface and exterior finger-molds are 3-D-printed, which means the system will work on virtually any robotic platform.

In the future, Rus says, the team plans to improve the existing sensors and add new ones that will allow the gripper to identify a wider variety of objects.

“If we want robots in human-centered environments, they need to be more adaptive and able to interact with objects whose shape and placement are not precisely known,” Rus says. “Our dream is to develop a robot that, like a human, can approach an unknown object, big or small, determine its approximate shape and size, and figure out how to interface with it in one seamless motion.”

This work was done in the Distributed Robotics Laboratory at MIT with support from The Boeing Company and the National Science Foundation.


Paper: “Haptic Identification of Objects using a Modular Soft Robotic Gripper”

Bianca Homberg

Daniela Rus

Distributed Robotics Lab

[article on csail.mit.edu]

Source: Robohub

To visit any links mentioned please view the original article; the link is at the top of this post.

Started October 02, 2015, 05:00:05 PM


Helen Kane - Don't Be Like That (1928) in Video


2 Comments | Started April 17, 2015, 08:08:31 PM
Why did HAL sing ‘Daisy’?

Why did HAL sing ‘Daisy’? in Articles

...a burning question posed by most people who have watched or read “2001: A Space Odyssey”: that is, why does the computer HAL-9000 sing the song ‘Daisy Bell’ as the astronaut Dave Bowman takes him apart?

Sep 04, 2015, 09:28:55 am

Humans in Robots on TV

Humans is a British-American science fiction television series. Written by the British team Sam Vincent and Jonathan Brackley, based on the award-winning Swedish science fiction drama Real Humans, the series explores the emotional impact of the blurring of the lines between humans and machines.

Aug 28, 2015, 09:13:37 am
Virtual Talk

Virtual Talk in Chatbots - English

[iTunes app] Virtual Talk is an AI chat app that lets you talk with whomever you want. It remembers what you say and learns new dialogs. This app is one of the smartest chatbots in the world.

Aug 17, 2015, 13:33:09
Robot Overlords

Robot Overlords in Robots in Movies

Not long after the invasion and occupation of Earth by a race of powerful robots seeking human knowledge and ingenuity, humans are confined to their homes; leaving without permission is to risk their lives. Robot sentries track people's movements through electronic implants in their necks in order to control them. Anyone who steps outside is warned by the sentries to get back inside; those who do not comply are shot immediately.

Long article on the making of here...

Aug 15, 2015, 14:42:25

Zerfoly in Chatbots - English

Zerfoly is a chatbot platform that makes it possible to create imaginary persons (chatbots) and teach them to talk to each other.

You will be able to let loose your creativity and imagination. Build persons by writing interactive dialogues. The persons you create will gradually become individuals with unique personalities. One of the persons could bear your name and learn to talk like you: your alter ego. Another way of using Zerfoly is as an interactive diary.

Aug 09, 2015, 11:06:42 am

YARP in Robotics

YARP is plumbing for robot software. It is a set of libraries, protocols, and tools to keep modules and devices cleanly decoupled. It is reluctant middleware, with no desire or expectation to be in control of your system. YARP is definitely not an operating system.

Jul 31, 2015, 16:23:49

Kimbot in Chatbots - English

Kimbot uses simple text pattern matching to search its database of past conversations for the most reasonable response to a given query. It learns by associating questions it asks with the responses that are given to it.
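
Kimbot's actual matcher is not shown here, but retrieving the "most reasonable response" by simple text matching over stored conversation pairs might look like this rough sketch (the scoring rule and sample data are illustrative):

```python
# Toy database of (question, response) pairs learned from past chats.
PAST_CONVERSATIONS = [
    ("what is your name", "I'm Kimbot."),
    ("do you like robots", "Robots are my favourite topic."),
    ("how does learning work", "I associate questions with replies."),
]

def respond(query):
    """Return the response whose stored question shares the most
    words with the query."""
    words = set(query.lower().split())
    def overlap(pair):
        question, _response = pair
        return len(words & set(question.split()))
    _question, response = max(PAST_CONVERSATIONS, key=overlap)
    return response

print(respond("what is my name"))  # best word overlap: "what is your name"
```

Learning, as described, would then simply append new (question, response) pairs to the database.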

Jul 08, 2015, 10:10:06 am
Telegram Bot Platform

Telegram Bot Platform in Chatbots - English

Telegram is about freedom and openness – our code is open for everyone, as is our API. Today we’re making another step towards openness by launching a Bot API and platform for third-party developers to create bots.

Bots are simply Telegram accounts operated by software – not people – and they'll often have AI features. They can do anything – teach, play, search, broadcast, remind, connect, integrate with other services, or even pass commands to the Internet of Things.
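
The Bot API is plain HTTPS: every method is reached at a URL of the form https://api.telegram.org/bot<token>/<method>. A minimal sketch of building such a call (the token below is a placeholder, not a real one):

```python
import urllib.parse

# Placeholder token; a real one is issued by Telegram's @BotFather.
TOKEN = "123456:ABC-PLACEHOLDER"

def method_url(method, **params):
    """Build the HTTPS URL for a Bot API method call."""
    url = f"https://api.telegram.org/bot{TOKEN}/{method}"
    if params:
        url += "?" + urllib.parse.urlencode(params)
    return url

# sendMessage is one of the API's methods; chat_id and text are its
# required parameters.
url = method_url("sendMessage", chat_id=42, text="Hello from a bot")
```

An HTTP request to that URL performs the call; responses come back as JSON.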

Jul 06, 2015, 18:13:45
ConceptNet 5

ConceptNet 5 in Tools

ConceptNet is a semantic network containing lots of things computers should know about the world, especially when understanding text written by people.

It is built from nodes representing words or short phrases of natural language, and labelled relationships between them. (We call the nodes "concepts" by tradition, but they'd be better known as "terms".) These are the kinds of relationships computers need to know to search for information better, answer questions, and understand people's goals.
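
A minimal sketch of that structure, as a labelled edge list (the relation names follow ConceptNet's conventions, but these particular facts are just illustrations):

```python
# Nodes are natural-language terms; each edge carries a relation label.
edges = [
    ("dog", "IsA", "animal"),
    ("dog", "CapableOf", "bark"),
    ("lemonade", "IsA", "drink"),
]

def related(term):
    """All (relation, other-term) pairs whose edge mentions `term`."""
    found = []
    for start, rel, end in edges:
        if start == term:
            found.append((rel, end))
        elif end == term:
            found.append((rel, start))
    return found

print(related("dog"))  # [('IsA', 'animal'), ('CapableOf', 'bark')]
```

ConceptNet itself stores millions of such edges, with weights and provenance, but each one reduces to a start node, a relation, and an end node.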

Jun 11, 2015, 07:34:13 am
Star Trek: The Motion Picture

Star Trek: The Motion Picture in Robots in Movies

[V'Ger a sexy android] In 2273, a Starfleet monitoring station, Epsilon Nine, detects an alien force, hidden in a massive cloud of energy, moving through space towards Earth. The cloud destroys three of the Klingon Empire's new K't'inga-class warships and the monitoring station en route. On Earth, the starship Enterprise is undergoing a major refit; her former Captain, James T. Kirk, has been promoted to Admiral and works in San Francisco as Chief of Starfleet Operations. Starfleet dispatches Enterprise to investigate the cloud entity as the ship is the only one in intercept range, requiring her new systems to be tested in transit. ~Wikipedia

Aug 06, 2008, 18:03:52