Robotics/Computer Control/Control Architectures/Swarm Robotics

Overview

Swarm robotics is a relatively new field within autonomous robotics. Swarm optimization techniques, however, have existed for over a decade, and they have recently been adapted and applied to autonomous robots. In swarm robotics the word "swarm" is defined in several ways. On first hearing the phrase "swarm robotics", many people think of basic interactions between robots, such as following a leader or relaying data between robots traveling toward a common goal. Swarm robotics, however, has advanced into complex problem solving. Robots may now be given a goal and only a rough suggestion of how to reach it. That suggestion may not be the most efficient way to achieve the goal, and may not reach the goal at all, so there must be a way to improve upon, or optimize, the suggestion. This is where optimization functions are introduced. Two techniques covered here are Ant Colony Optimization (ACO) and Particle Swarm Optimization (PSO); both are covered in depth below.

Basic Swarm Robotics

As stated earlier, basic swarm robotics includes simple behaviors such as follow-the-leader and robot-to-robot interactions such as tag. In these scenarios the robots do not actively improve upon their current goal; they simply interact with one another according to basic rules to achieve a predefined goal. A minimal follow-the-leader rule is sketched below.
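As a concrete illustration, here is a minimal sketch of a follow-the-leader rule, not taken from any of the projects described in this chapter. The Robot class, the fixed leader path, and the speed constant are all assumptions made for the example; each follower simply steers toward the current position of the robot ahead of it.

 # Minimal follow-the-leader sketch (hypothetical example, not from a specific project).
 # Each follower steers toward the robot directly ahead of it in the chain.
 import math

 class Robot:
     def __init__(self, x, y):
         self.x, self.y = x, y

     def step_toward(self, target, speed=0.5):
         # Move a fixed distance toward the target robot's current position.
         dx, dy = target.x - self.x, target.y - self.y
         dist = math.hypot(dx, dy)
         if dist > 1e-6:
             self.x += speed * dx / dist
             self.y += speed * dy / dist

 leader = Robot(0.0, 0.0)
 followers = [Robot(-i, 0.0) for i in range(1, 4)]

 for t in range(20):
     leader.x += 0.5                      # leader follows a simple straight-line path
     chain = [leader] + followers
     for ahead, robot in zip(chain, followers):
         robot.step_toward(ahead)         # each robot tracks the one ahead of it

No optimization takes place here: the behavior is fixed in advance, which is exactly what distinguishes basic swarm robotics from the optimization techniques described later.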

Examples

  • I-SWARM (EU-funded project)

[Image: I-SWARM microrobots (I-swarm.jpg)]

Swarm robotics is often implemented at the microrobotic scale, as pictured above, for several reasons. One of the main factors is size itself: a swarm composed of large robots can be bulky, expensive, and inefficient at gathering data. Small size does limit the onboard equipment of each robot, but that is where the principle of the swarm shines: every robot contributes a little more detail to the situation, and together they offer a complete picture. The swarm above, funded by the EU, is designed to combine all electrical components onto a single flexible board. The swarm has no major pending goals at this time, and its behaviors are modeled after biological insects.


  • Rice University: James McLurkin's Swarm

James McLurkin, an assistant professor at Rice University, has developed his own DARPA-funded swarm. His experimentation has explored physical data structures, such as arranging the swarm according to ID number, and he has also written a uniform dispersion algorithm. Additional swarm research includes his work on robot speed ratios, the study of message propagation speed versus the physical speed of the robots. This study was intended to uncover issues that arise when robots move faster than they can interpret relayed messages.


  • Carnegie Mellon: Magnetic Swarms

Research at Carnegie Mellon has been underway on a magnetic swarm of robots. The goal of this project is to develop a swarm that can magnetically shape-shift without any planar limitations. At the microscopic level this could be used as a three-dimensional physical modeling aid. The robots use an array of electromagnets to control interactions between adjacent robots.

  • Free University of Brussels ("L'Université libre de Bruxelles")

Several IRIDIA projects at ULB involve Swarm Intelligence. Some of their software and hardware designs are posted at GitHub.

Ant Colony Optimization (ACO)

ACO is one of the simplest swarm optimization techniques and has been proven to work reliably. The technique was drawn directly from nature, which had already shown it to be a viable solution. The concept requires a brief understanding of how ants behave; keep in mind that this description is tailored to an idealized example. In nature, ants wander at random. When an ant discovers food, it begins emitting pheromones and wanders back toward its nest. At some point another ant discovers food and also wanders back to the nest, and this cycle repeats many times. If a wandering ant discovers one of these pheromone trails, it is more likely to follow it, and when a fork is encountered the choice is biased by pheromone intensity: ants are more likely to travel the trail with the stronger pheromone concentration. Because more ants traveling a trail means more pheromone deposited, while the pheromones themselves are volatile and evaporate over time, a short trail visited by the same number of ants as a long trail over the same period will carry a greater pheromone intensity. Bias is therefore always placed on the shorter, more efficient trails, naturally selecting the best solution.

This same process can be simulated in software. A proposed solution is established and the "ants" navigate along it; the initial solution is then modified until the most efficient solution emerges. Below is a basic depiction of ACO, followed by a short code sketch. Initially two trails are established which happen to overlap in the middle, offering four possible solutions to the problem. Ants travel all four possibilities at random, and the possibilities are narrowed down via the pheromones: all four solutions are traveled by the same number of ants, but the shorter solutions accumulate a greater intensity of pheromone, biasing more ants toward the shorter trails, eliminating the other solutions, and converging on the best-fit solution.

[Figure: ACO example in which two overlapping trails form four candidate paths; pheromone intensity biases the ants toward the shortest one.]
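The following is a minimal sketch of the pheromone mechanism described above, applied to a fixed set of candidate paths rather than a full graph search. The path lengths, evaporation rate, deposit rule, and ant count are illustrative assumptions, not parameters from any particular ACO implementation.

 # Minimal Ant Colony Optimization sketch over a fixed set of candidate paths.
 # Illustrative assumptions: four path lengths, evaporation rate 0.5,
 # pheromone deposit proportional to 1/length, 50 ants per iteration.
 import random

 path_lengths = [10.0, 7.0, 12.0, 5.0]      # four candidate solutions
 pheromone = [1.0] * len(path_lengths)      # initial pheromone on each path
 evaporation = 0.5
 ants_per_iteration = 50

 def choose_path():
     # Ants pick a path with probability proportional to its pheromone level.
     total = sum(pheromone)
     r = random.uniform(0.0, total)
     cumulative = 0.0
     for i, p in enumerate(pheromone):
         cumulative += p
         if r <= cumulative:
             return i
     return len(pheromone) - 1

 for iteration in range(30):
     deposits = [0.0] * len(path_lengths)
     for _ in range(ants_per_iteration):
         i = choose_path()
         deposits[i] += 1.0 / path_lengths[i]   # shorter paths receive stronger deposits
     for i in range(len(pheromone)):
         pheromone[i] = (1.0 - evaporation) * pheromone[i] + deposits[i]

 best = min(range(len(path_lengths)), key=lambda i: path_lengths[i])
 print("Pheromone levels:", [round(p, 2) for p in pheromone])
 print("Converged toward shortest path of length", path_lengths[best])

Over successive iterations the shortest path accumulates the most pheromone, mirroring the natural selection of the most efficient trail described above.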

Particle Swarm Optimization (PSO)

PSO steps further away from purely stochastic search toward an even more intelligent solution to a problem. Much of PSO's functionality is modeled after natural behaviors such as a flock of birds or a school of fish. In a PSO function, a generic solution is first given and a swarm of "particles" is initialized. As the particles move toward the goal, each one monitors the other particles within a user-defined range of its own location; these neighbors contribute to a local best (lbest), the fittest solution discovered so far in the neighborhood. Each particle also monitors its personal best (pbest) along with the global best (gbest) found so far. Keep in mind that in some simulations lbest = gbest, and nearby particles have no local effect on each other. The particles are then biased toward a combination of lbest, pbest, and gbest. Their new velocity vector also includes a random scaling factor to prevent premature convergence on an inefficient solution. An equation is provided below to give a basic understanding of a swarm tracking its pbest and gbest.

General PSO Equation

 v[] = v[] + c1 × rand() × (pbest[] − present[]) + c2 × rand() × (gbest[] − present[])
 present[] = present[] + v[]

where present[] is a particle's current position, v[] is its velocity, pbest[] and gbest[] are the personal and global best positions found so far, rand() is a uniform random number between 0 and 1, and c1 and c2 are the learning factors.

This equation also includes learning factors, which in this case are both set to 2. From it we can see that the new velocity is the old velocity plus the randomly scaled differences pbest − present and gbest − present. As can be seen, this optimization technique offers a highly accurate convergence on the best-fit solution to the problem.
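Below is a minimal sketch of the gbest form of the update above, applied to minimizing a simple one-dimensional function. The objective function, swarm size, velocity clamp, and iteration count are assumptions chosen for illustration, not values from a particular PSO library.

 # Minimal Particle Swarm Optimization sketch (gbest form of the update above).
 # Illustrative assumptions: minimize f(x) = (x - 3)^2, 20 particles, 100 iterations.
 import random

 def f(x):
     return (x - 3.0) ** 2          # objective: the best-fit solution is x = 3

 c1 = c2 = 2.0                      # learning factors, as in the equation above
 vmax = 4.0                         # velocity clamp to keep the update stable
 num_particles, iterations = 20, 100

 present = [random.uniform(-10.0, 10.0) for _ in range(num_particles)]
 velocity = [0.0] * num_particles
 pbest = list(present)                     # each particle's personal best position
 gbest = min(present, key=f)               # global best position found so far

 for _ in range(iterations):
     for i in range(num_particles):
         # Velocity update: bias toward pbest and gbest with random scaling.
         velocity[i] = (velocity[i]
                        + c1 * random.random() * (pbest[i] - present[i])
                        + c2 * random.random() * (gbest - present[i]))
         velocity[i] = max(-vmax, min(vmax, velocity[i]))
         present[i] += velocity[i]
         if f(present[i]) < f(pbest[i]):
             pbest[i] = present[i]
         if f(present[i]) < f(gbest):
             gbest = present[i]

 print("Best solution found:", round(gbest, 3))

In practice a velocity clamp or an inertia weight is commonly added, as in the sketch, to keep the particles from overshooting indefinitely; the core update otherwise matches the equation given here.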

[Image: particle swarm animation (Particleswarm.jpg)]

  • Provided above is a link to a Java applet developed to help understand the operation of PSO.