Robotics and the World
Table of Contents:
1) The Future of Robots
2) Programming Concepts
3) Robot Control
4) Robot Hardware
5) Mathematics of Robot Control
6) Robot Programming
7) Obstacle Avoidance
8) Task Planning and Navigation
9) Robot Vision
10) Knowledge Based Vision Systems
11) Robots and Artificial Intelligence
1. The Future of Robots
Robots will soon be everywhere, in our homes and at work. They will change the way we live, and this will raise many philosophical, social, and political questions that will have to be answered. In science fiction, robots become so intelligent that they decide to take over the world because humans are deemed inferior. In real life, however, they might not choose to do that.
Robots will be commonplace in homes, factories, agriculture, building and construction, undersea and space operations, mining, hospitals, and streets, performing repair, construction, maintenance, security, entertainment, companionship, and care tasks.
Purposes of Future Robots:
- Robotized space vehicles and facilities
- Anthropomorphic general-purpose robots with human-like hands, used for factory jobs
- Intelligent robots for unmanned plants
- Totally automated factories
- Robots for guiding blind people
- Robots for almost any job in the home or hospital, including robotic surgery
- Housework robots for cleaning, washing, etc.
- Small, specialized, and attractive domestic robots
Robotics will greatly affect the economy as well. If, as many expect, robots become capable of replacing humans in most manufacturing and service jobs over the next two decades, economic development will be driven largely by advances in robotics. Given Japan's current strength in this field, it could become the economic leader over that period.
2. Programming Concepts
The development of robot programming concepts is almost as old as the development of robot manipulators themselves. Because the ultimate goal of industrial robotics has been the development of sophisticated production machines, with the hope of reducing costs in manufacturing areas like material handling, welding, spray-painting, and assembly, tremendous efforts have been undertaken by the international robotics community to design methods that are user-friendly and at the same time powerful. The evolution reaches from early control concepts on the hardware level, via point-to-point and simple motion-level languages, to motion-oriented structured robot programming languages. Robot programming languages can be classified according to the robot reference model, the type of control structure used for data, the type of motion specification, the sensors, the interfaces to external machines, and the peripherals used. The following types of programming languages are available:
- Point-to-point motion languages
- Basic motion languages at the assembler level
- Non-structured high-level programming languages
- Structured high-level programming languages
- NC-type languages
- Object-oriented languages
- Task-oriented languages
3. Robot Control
There are many ways to design software for controlling a robot. The focus is not on low-level coding issues, but on high level concepts about the special situations robots will encounter and ways to address these peculiarities. The approach taken here proposes and examines some control software architectures that will comprise the brains of the robot.
Probably the biggest problem facing a robot is overall system reliability. A robot might face any combination of the following failure modes:
Mechanical Failures - These might range from temporarily jammed movements to wedged geartrains or a serious mechanical breakdown.
Electrical Failures - We hope it is safe to assume that the computer itself will not fail, but loose motor and sensor connections are a common problem.
Sensor Unreliability - Sensors will provide noisy data (data that is sometimes accurate, sometimes not) or data that is simply incorrect (touch sensor fails to be triggered).
The first two of the above problems can be minimized with careful design, but the third category, sensor unreliability, warrants a closer look. Before discussing control ideas further, here is a brief analysis of the sensor problem.
An example of robot control is a robot interacting with a wall. In a worst-case scenario, what could happen while the robot is merrily running along, following the wall? Several possibilities:
1. The robot could run into an object or a corner, properly triggering a touch sensor.
2. The robot could run into an object or corner, not triggering a touch sensor.
3. The robot could wander off away from the wall.
4. The robot could slam into the wall, get stuck, and conditionally trigger a touch sensor.
5. The proximity sensor could fall off its mount, causing a series of incorrect sensor readings.
Ideally, control software should expect occurrences of cases like those numbered #1 through #4 and be able to detect case #5.
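One hedged way to detect a failure like case #5 is a plausibility guard: accept a reading only if it falls inside a physically sensible range, and suspect a hardware fault after a long streak of implausible values. The class, range limits, and streak threshold below are hypothetical illustrations, not part of any specific robot's software:

```python
class GuardedSensor:
    """Wraps a raw sensor read function and flags implausible streaks.

    A reading is accepted only if it lies inside [lo, hi]; a run of
    out-of-range readings longer than `max_bad` suggests a hardware
    fault (e.g. a proximity sensor fallen off its mount).
    """

    def __init__(self, read_fn, lo, hi, max_bad=5):
        self.read_fn = read_fn
        self.lo, self.hi = lo, hi
        self.max_bad = max_bad
        self.bad_streak = 0
        self.last_good = None

    def read(self):
        value = self.read_fn()
        if self.lo <= value <= self.hi:
            self.bad_streak = 0
            self.last_good = value
            return value, False          # (reading, fault_suspected)
        self.bad_streak += 1
        fault = self.bad_streak > self.max_bad
        return self.last_good, fault     # fall back to last good value
```

A control loop can then keep following the wall on occasional bad readings (cases #1 through #4) but stop or switch behaviors once the fault flag is raised.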
4. Robot Hardware
Wheels
Robot builders often find that the trickiest part of a robotics project is making the wheels. First you need to find a suitable tire/wheel combination; then you must figure out a way to attach a sprocket so that it will handle the torque of a geared-down drive motor.
Motors
At first, operating a motor seems quite simple: apply a voltage across its terminals, and it spins. But what if you want to control which direction the motor spins? Correct, you reverse the wires. Now what if you want the motor to spin at half that speed? You would use less voltage. But how would you get a robot to do those things autonomously? How would you know what voltage a motor should get? Why 12V rather than 50V? What about motor overheating? Operating motors can be much more complicated than you think.
Sensors
A light sensor uses a photocell that allows your robot to detect and react to light. With a light sensor, you can program a whole new range of capabilities into your robot. Design a simple tracker that follows the beam of a flashlight, or use a light sensor to help your robot avoid getting stuck under furniture by making it steer away from shadows. You can even give your robot color vision by putting colored filters on different light sensors!
5. Mathematics of Robot Control
Mathematics in robotics mainly involves robot kinematics. Robot kinematics is the study of the motion (kinematics) of robots. In a kinematic analysis, the position, velocity, and acceleration of all the links are calculated without considering the forces that cause this motion. The relationship between motion and the associated forces and torques is studied in robot dynamics. One of the most active areas within robot kinematics is screw theory.
Robot kinematics deals with aspects of redundancy, collision avoidance, and singularity avoidance. When working with the kinematics of a robot, each part is assigned its own frame of reference, so a robot with many parts may have many individual frames, one for each movable part. For simplicity, consider a single manipulator arm. The frames are numbered systematically: the immovable base of the manipulator is numbered 0, the first link joined to the base is numbered 1, the next link 2, and so on up to n for the last, nth link.
Robot kinematics are mainly of the following two types: forward kinematics and inverse kinematics. Forward kinematics is also known as direct kinematics. In forward kinematics, the length of each link and the angle of each joint is given and we have to calculate the position of any point in the work volume of the robot. In inverse kinematics, the length of each link and position of the point in work volume is given and we have to calculate the angle of each joint.
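As a concrete (if simplified) illustration of both directions, here are forward and inverse kinematics for a two-link planar arm. The link lengths and the choice of the elbow-down solution are assumptions made for this sketch, not taken from the text above:

```python
import math

L1, L2 = 1.0, 1.0  # assumed link lengths

def forward(theta1, theta2):
    """End-effector position from joint angles (forward kinematics)."""
    x = L1 * math.cos(theta1) + L2 * math.cos(theta1 + theta2)
    y = L1 * math.sin(theta1) + L2 * math.sin(theta1 + theta2)
    return x, y

def inverse(x, y):
    """Joint angles from end-effector position (inverse kinematics).

    Returns the elbow-down solution; raises ValueError if (x, y)
    lies outside the arm's work volume.
    """
    c2 = (x * x + y * y - L1 * L1 - L2 * L2) / (2 * L1 * L2)
    if not -1.0 <= c2 <= 1.0:
        raise ValueError("target outside the work volume")
    theta2 = math.acos(c2)
    theta1 = math.atan2(y, x) - math.atan2(L2 * math.sin(theta2),
                                           L1 + L2 * math.cos(theta2))
    return theta1, theta2
```

Note that inverse kinematics generally has multiple solutions (elbow-up vs. elbow-down here), whereas forward kinematics is always unique.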
6. Robot Programming
Programmer Hierarchy: http://funstuff.lefora.com/2008/03/10/hierarchy-programming-languages/page1/
How to Program a Robot
Step 1 - Buy a factory-built robot. There are a few different manufacturers, but the most popular and well-established maker of domestic robots is the iRobot company.
Step 2 - Set the internal clock on your factory-built robot. It may come with an atomic or radio-controlled clock already in it, which means you will only have to turn it on to set the time. Once the robot is set to the right date, schedule the times that you would like the robot to operate. For cleaning robots, that is usually when you are away from home. Some robots may also require the measurements of the room they will be traveling in.
Step 3 - Build a robot. This step is for far more advanced robot users. The parts and construction of a robot depend largely on what the robot's primary function will be. If you want the robot to carry things around, it will probably look like an arm mounted on wheels. Because of the large variety of robots and the complex nature of their construction, it is advisable to seek out specific plans for the robot you wish to build.
Step 4 - Write the code for your robot. Again, this seems like a vague and huge task for one step, and it is. There are several programming languages you can write your code in, depending on the software you are using. The code you write will also depend on what the robot's primary function is. Since you don't want your robot to get stuck in a corner, a common piece of programming deals with what to do in exactly that situation. The programming should roughly resemble basic reasoning: for example, IF the left sensor detects an object THEN turn the wheels to the right. Programming requires a lot of foresight and trial and error.
Step 5 - Test your programming. This is important for both factory and home-built robots. Run the robot through all possible situations it may encounter and take note of how it performs. Go back and fix the code as you see fit.
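The IF/THEN reasoning described in Step 4 can be sketched as a tiny decision function. The sensor flags and action names below are hypothetical placeholders for whatever your hardware provides:

```python
def decide(left_blocked, right_blocked):
    """Pick a steering action from two bump/IR sensor flags."""
    if left_blocked and right_blocked:
        return "reverse"        # boxed in: back out of the corner
    if left_blocked:
        return "turn_right"     # object on the left, steer away from it
    if right_blocked:
        return "turn_left"
    return "forward"
```

Real control loops call a function like this many times per second, which is also what makes the testing in Step 5 practical: you can exercise every sensor combination directly.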
7. Obstacle Avoidance
Obstacle avoidance is a robotic discipline with the objective of moving vehicles on the basis of sensory information. Compared with classic path-planning methods, these techniques are a natural alternative when the scenario is dynamic and behaves unpredictably. In such cases the surroundings do not remain invariant, so the sensory information is used to detect changes and to adapt the motion accordingly.
The research conducted faces two major problems in this discipline. The first is to move vehicles in troublesome scenarios, where current technology has proven limited applicability. The second one is to understand the role of the vehicle characteristics (shape, kinematics and dynamics) within the obstacle avoidance paradigm.
Most obstacle avoidance techniques do not take vehicle shape and kinematic constraints into account. They assume a point-like, omnidirectional vehicle and are therefore forced to rely on approximations. Our contribution is a framework that considers shape and kinematics together, in an exact manner, within the obstacle avoidance process, by abstracting these constraints away from the avoidance method in use. Our approach can be applied to many nonholonomic vehicles with arbitrary shape.
For these vehicles, the configuration space is three-dimensional, while the control space is two-dimensional. The main idea is to construct, centered on the robot at any time, the two-dimensional manifold of the configuration space that is defined by elementary circular paths. This manifold contains all the configurations that can be attained at each step of the obstacle avoidance and is thus general for all methods. Another important contribution is the exact calculation of the obstacle representation in this manifold for any robot shape (i.e. the configuration regions in collision). Finally, we propose a change of coordinates of this manifold such that the elementary paths become straight lines. Therefore, the three-dimensional obstacle avoidance problem with kinematic constraints is transformed into a simple obstacle avoidance problem for a point moving in a two-dimensional space without any kinematic restriction (the usual approximation in obstacle avoidance). Thus, existing avoidance techniques become applicable.
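As a hedged sketch of the underlying arc geometry (standard circle geometry, not the exact construction of the framework summarized above): for a robot whose elementary paths are circular arcs starting at the origin with heading along +x, every obstacle point maps to the signed radius of the unique arc that would reach it, reducing a 2-D collision test to a comparison of radii:

```python
import math

def turning_radius(x, y):
    """Signed radius of the elementary circular path, starting at the
    origin with heading along +x, that passes through the point (x, y).

    Positive radius = left turn, negative = right turn; a point with
    y == 0 lies on the straight-ahead path (infinite radius).
    """
    if y == 0:
        return math.inf
    # Circle through the origin, tangent to the x-axis, center (0, r).
    return (x * x + y * y) / (2 * y)
```

In curvature coordinates (1/radius), each elementary arc becomes a single value, which is the flavor of "elementary paths become straight lines" described above.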
8. Task Planning and Navigation
Task planning for mobile robots usually relies solely on spatial information and on shallow domain knowledge, such as labels attached to objects and places. Although spatial information is necessary for performing basic robot operations (navigation and localization), the use of deeper domain knowledge is pivotal to endow a robot with higher degrees of autonomy and intelligence. A key step is defining a specific type of semantic map that integrates hierarchical spatial information and semantic knowledge. Semantic maps can improve task planning in two ways: extending the capabilities of the planner by reasoning about semantic information, and improving planning efficiency in large domains. Several experiments have demonstrated the effectiveness of such solutions in a domain involving robot navigation in a domestic environment.
For any mobile device, the ability to navigate its environment is one of the most important capabilities of all. Staying operational, i.e. avoiding dangerous situations such as collisions and staying within safe operating conditions (temperature, radiation, exposure to weather, etc.), comes first, but if any tasks are to be performed that relate to specific places in the robot's environment, navigation is a must. In the following, we present an overview of the skill of navigation, identify the basic building blocks of a robot navigation system and the types of navigation systems, and take a closer look at the related components.
Robot navigation means the robot's ability to determine its own position in its frame of reference and then to plan a path towards some goal location. In order to navigate its environment, the robot (or any other mobility device) requires a representation, i.e. a map of the environment, and the ability to interpret that representation. Navigation can be defined as the combination of three fundamental competences: self-localization, path planning, and map building and interpretation.
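Determining the robot's own position is often bootstrapped with wheel odometry (dead reckoning). Below is a minimal differential-drive pose update; the 0.30 m wheelbase and the mid-heading integration scheme are assumptions for the sketch:

```python
import math

def odometry_step(x, y, theta, d_left, d_right, wheelbase=0.30):
    """Update a 2-D pose (x, y, heading) from incremental wheel travel.

    d_left / d_right: distance covered by each wheel since the last
    update (metres); wheelbase is an assumed 0.30 m axle width.
    """
    d = (d_left + d_right) / 2.0             # distance of the midpoint
    d_theta = (d_right - d_left) / wheelbase  # change in heading
    # Integrate along the chord using the mid-heading approximation.
    x += d * math.cos(theta + d_theta / 2.0)
    y += d * math.sin(theta + d_theta / 2.0)
    return x, y, theta + d_theta
```

Because wheel slip makes the error grow without bound, real systems periodically correct odometry against a map or external landmarks, which is where the self-localization competence comes in.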
9. Robot Vision
One of the most fundamental tasks that vision is useful for is the recognition of objects (be they machine parts, light bulbs, etc.). Evolution Robotics introduced a significant milestone in the near-realtime recognition of objects based on feature points. The software identifies points in an image that look the same even if the object is moved, rotated, or scaled to some small degree. Matching these points to previously seen image points allows the software to 'understand' what it is looking at even if it does not see exactly the same image.
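The point-matching idea can be illustrated with a toy nearest-neighbour matcher over feature descriptors. Real systems use invariant descriptors computed from the image; here the descriptors are plain tuples and the ratio threshold is a made-up stand-in:

```python
def match_descriptors(query, database, ratio=0.8):
    """Match each query descriptor to its nearest database descriptor.

    A match is kept only if the best distance is clearly smaller than
    the second-best (a ratio test), which rejects ambiguous points.
    `database` must contain at least two descriptors.
    """
    def dist2(a, b):
        # Squared Euclidean distance between two descriptor vectors.
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

    matches = []
    for qi, q in enumerate(query):
        ranked = sorted(range(len(database)),
                        key=lambda di: dist2(q, database[di]))
        best, second = ranked[0], ranked[1]
        if dist2(q, database[best]) < (ratio ** 2) * dist2(q, database[second]):
            matches.append((qi, best))
    return matches
```

Rejecting ambiguous matches is what lets recognition survive when only part of the object is visible: the surviving matches still vote for the right stored object.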
As the hobbyist robotics market rapidly grows, so too do the machine vision choices at the hobbyist's disposal. The CMUcam (initially created at Carnegie Mellon) is by far the most popular vision camera; it can track an object based on its color and even move the camera, if it is mounted on servos (small motors), to follow the object. Thanks to its low price and straightforward usage, it has become very widely used by hobby and academic roboticists.
10. Knowledge Based Vision Systems
A knowledge-based vision system automatically configures image-processing programs and supports the recognition of objects. The system runs in two phases. In the first phase, a sequence of operators is generated from primitives (curved edges, corners, etc.) and from the user's explicit specification of the image content, and all of the operators' free parameters are computed adaptively. In this phase the system uses a rule base composed of knowledge about visual processing operators, their parameters, and their interdependence. In the second phase, a hierarchical object model is formulated and edited by the user, based on the primitives selected in the first phase; the system's editor is provided specially for this purpose. Using the hierarchical object model facilitates a rapid interpretation of the result obtained from the preceding image processing for the subsequent object recognition.
11. Robots and Artificial Intelligence
Artificial intelligence (AI) is the intelligence of machines and the branch of computer science which aims to create it. Major AI textbooks define the field as "the study and design of intelligent agents," where an intelligent agent is a system that perceives its environment and takes actions which maximize its chances of success. John McCarthy, who coined the term in 1956, defines it as "the science and engineering of making intelligent machines."
The field was founded on the claim that a central property of human beings, intelligence—the sapience of Homo sapiens—can be so precisely described that it can be simulated by a machine. This raises philosophical issues about the nature of the mind and limits of scientific hubris, issues which have been addressed by myth, fiction and philosophy since antiquity. Artificial intelligence has been the subject of breathtaking optimism, has suffered stunning setbacks and, today, has become an essential part of the technology industry, providing the heavy lifting for many of the most difficult problems in computer science.
AI research is highly technical and specialized, so much so that some critics decry the "fragmentation" of the field. Subfields of AI are organized around particular problems, the application of particular tools and around longstanding theoretical differences of opinion. The central problems of AI include such traits as reasoning, knowledge, planning, learning, communication, perception and the ability to move and manipulate objects.
- Robotics has details on how to build small robot hardware
- Artificial Intelligence
- Embedded Systems has details on typical robot CPUs, and how to program them