What's interesting in robotics

Basics of robotics

Controllable machines have fascinated mankind since ancient times. But where do robotics and robots have their roots? What were the milestones of the past and what does the future hold? What are the key components and how do these machines work? What are cobots and nanobots? How intelligent are robots really? Can they develop feelings and learn social behavior? We shed light on the most important facets of a technology that will fundamentally influence the future of mankind.

Robotics is a sub-area of engineering and the natural sciences that spans mechanical engineering, electrical engineering, computer science and other disciplines. It deals with the design, construction, operation and use of robots as well as the computer systems for their control, sensory feedback and information processing. A robot is a unit that implements this interaction with the physical world on the basis of sensors, actuators and information processing. A central area of application is industry, more precisely Industry 4.0, where so-called industrial robots are used.

Areas of application and advantages of robot technology

Cobots

Collaborative robots, or cobots (short for “collaborative robot”), are becoming increasingly important. Conventional industrial robots are being replaced or supplemented in more and more areas of industry by collaborative robots. Cobots work together with people in the production process and, unlike the typical industrial robot, are no longer separated from their human colleagues by protective devices. Compared to traditional industrial robots, collaborative robots are more compact, more flexible to use and easier to program.

Cobots do not replace human workplaces, but complement them. The Canadian company Paradigm Electronics provides an example: there, productivity was increased by 50 percent through the use of cobots without any job losses. The staff now takes on tasks in newly created areas of activity such as programming the machines or quality control at the end of the mechanical production process. Experts from the management consultancy Boston Consulting Group expect that the use of robots will increase productivity per employee by up to 30 percent in the future.

Industrial robots

Programmable machines that handle, assemble or process workpieces in an industrial environment are called industrial robots. These robots largely consist of a robot arm, a gripper, various sensors and a control unit. They can also carry out actions autonomously according to their programming. Global robot density has increased significantly in recent years: while in 2015 there was an average of 66 units per 10,000 employees, the figure currently stands at 74 units. In Europe, the average robot density is 99, in the USA 84, and in Asia 63.

According to statistics from the IFR (International Federation of Robotics, the international umbrella organization of the national robotics associations), the USA recorded a new high in 2016 with around 31,500 installed industrial robots, an increase of 15 percent compared to 2015. Worldwide, around 290,000 industrial robots were in use in 2016, 14 percent more than in 2015. The trend is expected to continue: average growth of 12 percent per year is forecast for the coming years.

Industrial robots in the automotive industry

In this key industry for robotics, machines have played an important role in automated production processes for more than 50 years, making processes more efficient, safer, faster and more flexible. The first industrial robot, Unimate, was integrated into the production process at General Motors as early as 1961, where it was used to remove injection molded parts. In 1973 the first industrial robot started work at VW in Wolfsburg. The in-house development, named “Robby” by its human colleagues, was used in the production of the Passat model. According to an IFR survey, more than 17,600 industrial robots were in use in the US automotive industry in 2016, 43 percent more than in 2015.

Autonomous transport systems / AGV

An AGV (Automated Guided Vehicle) is a driverless transport vehicle with its own drive that is controlled automatically and guided without contact. AGVs are typically used to move materials in a manufacturing facility. In the industrial environment, they represent the evolution from the classic, bulky conveyor belt to a space-saving, highly flexible solution. Another popular place of use for AGVs is the warehouse, where individual goods or large assortments of goods are brought to defined packing stations and processed there. This type of robot usually moves at approx. 1 to 2 meters per second and can transport payloads of up to approx. 2,000 kilograms.

AGVs differ in the type of power supply, the areas of responsibility, and the navigation and routing method. Power is supplied either via a cable (for rail-mounted AGVs), via the rail itself, or via a battery; the battery is charged on an induction charging mat or at charging stations, where it can also be exchanged. Depending on the area of activity and the place of use, AGVs can move pallets as forklift trucks, pull trailers as tractors, or transport boxes and packages on mobile loading surfaces.

An AGV can be navigated using a laser, for example, whereby the robot scans labels attached to certain locations and thus finds its next destination. Optical navigation by recognizing colors is another option, and antennas or rails are also used to steer AGVs. The most flexible are autonomous AGVs, which scan their entire environment and create virtual maps from the results. They are able to share detected obstacles with other AGVs and automatically generate the optimal route for the transport. Depending on the area of application and the required degree of movement, AGVs are driven by one to four actively driven wheels.

Service robots

A service robot is a machine that provides services to people. A distinction is made between use for private individuals and that in a professional environment. In the private sector, for example, vacuum cleaner or lawnmower robots are established in households. Here are the different types of service robots:

Whether mowing the lawn, vacuuming or cleaning windows: robots can take over some of the most annoying everyday household tasks. The associated time and labor savings are convincing: according to a study by the Federal Association for Information Technology, Telecommunications and New Media (Bitkom), 42 percent of the more than 1,000 German citizens surveyed could imagine using a robot in the household. More than 80 percent would like assistance with vacuuming or mopping, and 41 percent want robots for gardening. In the homes of 15 percent of those surveyed, robots are already in use.

Although data protection and data security remain concerns, 49 percent of the participants in the representative Bitkom survey mentioned above can imagine entrusting the monitoring of their own home to a robot. Such a monitoring robot keeps an eye on the household while its owner is relaxing on vacation, on a business trip or at the office. These robots can be controlled via an app over an internet connection. If the robot's motion detection registers activity, it sends an alarm signal to a smartphone. The integrated camera takes HD recordings and offers an intercom function.

At the Consumer Electronics Show 2018 in Las Vegas, the Korean electronics group LG presented its new robot product series "CLOi". The “Serving Robot” model supplies customers with food and beverages. It can be used around the clock (e.g. at airports, train stations or hotels) and serves meals on a tray that the customer can take with them. After the service has been carried out, the robot finds its way back to fetch new snacks and carry out the next order.

Agriculture is another area that offers great potential for the use of robots. Pilot projects are currently underway in which robotic arms and multispectral cameras installed on a harvesting vehicle optimize the cucumber harvesting processes. Small planting robots, which are controlled with a tablet and not only sow the seeds, but also document all important information, help with the sowing. Drones are suitable for monitoring the degree of ripeness of plant products or the growth of weeds and, if necessary, also for spraying critical areas.

Robots are used as therapy companions especially where patients have to reactivate their musculoskeletal system after a stroke or a neurological disease. With the help of gait training machines, people suffering from paralysis relearn everything from walking to climbing stairs. One robot can do the work of two therapists, and the patient also receives direct feedback during the exercises. A portable walking robot (exoskeleton) enables paralyzed patients to walk without assistance; the robot's step movements are triggered by shifting the patient's weight.

Robots also have a permanent place in the operating theater, although they do not replace the surgeon but serve as precise helpers in minimally invasive procedures. Instead of operating surgical instruments such as scissors or tweezers himself, the surgeon controls a robot from a console using a joystick and foot pedals. Interventions using a surgical robot save time and are also gentler on the patient; risks from human or technical failure are minimized.

The Aibo robot dog from Sony is an entertainment robot that came back onto the market in 2017 in a new version after sales of the original ended in 2006. Aibo perceives its surroundings via two cameras and microphones, and the collected data is evaluated by a learning program; in this way, the robot dog is supposed to develop an individual personality. In addition to Aibo, Roberta can also be counted among the toy robots. This initiative of the Fraunhofer Institute for Intelligent Analysis and Information Systems has been using special robots since 2002 to encourage children to engage playfully with technology and to convey the fascination of development and programming.

Humanoid robots

Machines whose construction is based on the human form are considered humanoid robots. The positions of the joints and movement sequences are inspired by the human musculoskeletal system. This is made clear not least by the fact that humanoid robots usually move on two legs in an upright gait. A main motive for research and development in the field of humanoid robots is artificial intelligence (AI).

Artificial intelligence

For a large part of the scientific community, the development of a humanoid robot is an important basis for the creation of a human-like AI. This view rests on the notion that AI cannot simply be programmed, but emerges from learning processes. According to this, a robot can only develop artificial intelligence if it actively participates in social life. Such active participation, including communication, is only possible if the robot is perceived and accepted as an equal being in society thanks to its shape, mobility and sensor technology.

Humanoid robots as multifunctional helpers

With rollers instead of legs, but with a cute stature, a friendly voice and big googly eyes, the robot Josie Pepper is currently helping out at Munich Airport. Together with Lufthansa, Munich Airport is one of the first airports to test the use of a humanoid robot in live operation. Josie provides information about the current flight status and check-in, and describes the way to the departure gate or the nearest restaurant. The development of the French company SoftBank Robotics is connected to the Internet via WLAN and can thus access a cloud in order to process and analyze speech dialogues and link them to the airport's data. In this way, Josie learns something new with every dialogue and answers questions individually.

Human-machine interaction

So that people without programming knowledge can communicate with robots in a natural way, giving them instructions or information and asking questions, interaction between humans and machines via speech, gestures and facial expressions is of crucial importance in robotics.

For a machine, recognizing and interpreting natural language in real time is a highly complex process, even in the age of smart speakers. This is due to variable factors such as room acoustics, background noise, volume, dialects, accents and the general pitch of the voice. The accuracy of natural language recognition is now around 95 percent.

Real-time 3D data acquisition is necessary for precise recognition and interpretation of human gestures without latency. The Fraunhofer Institute for Applied Optics and Precision Mechanics is researching systems for the rapid acquisition and processing of 3D data. With two high-speed cameras and a color camera, images are recorded and converted into 36 3D data sets per second using special software. In addition, the researchers have developed adaptive software based on neural networks for the system.

Facial expressions allow conclusions to be drawn about the course of a conversation between two people, and this should also be exploited in dialogue between humans and robots. Thanks to elastic polymers and integrated servos, the faces of the robots from the manufacturer Hanson Robotics can represent a variety of facial expressions. The aim is for the robot to adapt its interaction to the facial expressions of its human counterpart: with a fearful facial expression, it should, for example, keep its distance, and with a questioning expression, provide information.

In current research projects, robots should learn to recognize and understand human emotions and to react accordingly. With appropriate facial expressions and gestures, the robot can show or simulate emotions as a reaction to humans. One example is the Emotisk training system, which scientists from the Humboldt University of Berlin are currently developing in cooperation with the University Clinic Aachen and the University Clinic Cologne: The software evaluates information such as the direction of gaze or facial expressions and gives people appropriate emotional feedback. The aim of the system is to enable autistic people to recognize the emotions of others and to send non-verbal signals in response.

Due to their visual similarity to humans as well as their human-like behavior and actions, we tend to ascribe a personality to humanoid robots. Indeed, simulating a personality can influence human-machine interaction. For an experiment, Japanese scientists from the Toyohashi University of Technology developed a robot that follows the gaze of its human interlocutor and registers as soon as they are distracted by other events. In these situations, the robot leans forward, raises its voice and nods. The result: through these demonstrated personality traits, the robot regains the attention of its human counterpart.

The line between a “merely” smart robot and a social robot is currently difficult or impossible to draw. A current example is Jibo, the first social robot from the US company of the same name, available since the end of 2017. According to the manufacturer, the approximately 30-centimeter-tall household robot loves to be among people and to build a relationship with them. It learns which people are important to its owner and fits seamlessly into their social life. Jibo is also said to be charming and to create moments of surprise with spontaneous actions such as a dance. So much for the manufacturer's claims. Practical tests have shown that the social robot does not differ significantly from other smart systems; its price, however, is significantly higher.

The increasing popularity and spread of robotics in various areas of life and the associated interaction between humans and machines offer both opportunities and challenges for the safety and protection of humans and data. The safety requirements become particularly clear when using industrial robots and collaborative robots in the work area.

When robots are used in industrial production, occupational safety measures ensure that people are protected. This includes sufficient safety distances between machine and person, safety fences, light barriers or zones monitored by scanners. Emergency switches on the robot or its ability to detect collisions with objects and people and react accordingly are also part of the safety measures. This is especially true for cobots.

With newer industrial robots, separating protective devices are dispensed with in certain work areas; other technical protective measures are used instead. For example, if a person is several meters away, the robot operates in normal mode. If the person approaches, the robot reduces its speed once a defined distance threshold is crossed. If the person comes within about one meter, it stops.
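The zoned behavior described above can be sketched as a simple speed-scaling function. The one-meter stop zone comes from the text; the three-meter slow zone and the linear scaling are illustrative assumptions, not values from any particular robot or safety standard.

```python
def safe_speed(distance_m: float,
               stop_zone: float = 1.0,    # stop when a person is this close (from the text)
               slow_zone: float = 3.0,    # assumed distance where speed reduction begins
               normal_speed: float = 1.0) -> float:
    """Return a speed factor between 0 and normal_speed based on human distance."""
    if distance_m <= stop_zone:
        return 0.0                        # person in the immediate vicinity: stop
    if distance_m < slow_zone:
        # scale speed linearly between the stop and slow thresholds
        return normal_speed * (distance_m - stop_zone) / (slow_zone - stop_zone)
    return normal_speed                   # person far away: normal mode
```

With these assumed thresholds, a person five meters away leaves the robot at full speed, while at two meters the speed factor drops to half.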

ToF (Time of Flight) technology is used in newer systems. These are 3D camera systems that measure distances using the time-of-flight method: the surroundings are illuminated with a light pulse, and for each pixel the camera measures the time it takes the light to reach the object and return, thus providing the distance to the object depicted at that pixel. Radar sensors are also used in this area; they detect movements using electromagnetic waves in the radio frequency range. By combining several redundant technologies, safety for people can be increased further.
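The per-pixel distance calculation behind a ToF camera reduces to d = c·t/2: the pulse covers the path to the object and back, so the measured round-trip time is halved. A minimal sketch:

```python
SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def tof_distance(round_trip_time_s: float) -> float:
    """Distance to an object from the measured round-trip time of a light pulse."""
    # the pulse travels to the object and back, so halve the total path length
    return SPEED_OF_LIGHT * round_trip_time_s / 2.0

# a round trip of 10 nanoseconds corresponds to roughly 1.5 meters
d = tof_distance(10e-9)
```

The tiny times involved (nanoseconds per meter) are why ToF systems need fast, specialized sensor electronics rather than ordinary camera hardware.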

In a working world in which ever more complex systems are interconnected and communicate with one another, it is important to protect these systems from information theft and manipulation. The greatest threats are the manipulation of configuration files (changing movement areas or position data), code manipulation (reprogramming of processes) and the manipulation of robot feedback (deactivation of alarms). Such interventions can destroy products, damage robots and, in the worst case, injure the people working in the area. To guarantee the security of data, interfaces and communication channels, companies are increasingly opting for external software solutions. These solutions protect against the manipulation of configuration files by encrypting them and saving them in a Secure Element (SE). Authentication also prevents unauthorized access to the central control unit. To prevent code manipulation, software solutions offer authorization of transmitted commands using a hash process as well as code checks.
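One common way to implement the hash-based command authorization mentioned above is a keyed hash (HMAC) over each command. The sketch below uses Python's standard hmac and hashlib modules; the shared key and the command string are illustrative assumptions, and a real controller would keep the key in the secure element rather than in source code.

```python
import hashlib
import hmac

SHARED_KEY = b"demo-key-stored-in-secure-element"  # illustrative; would live in the SE

def sign_command(command: bytes) -> bytes:
    """Attach an HMAC tag so the controller can verify the command's origin."""
    return hmac.new(SHARED_KEY, command, hashlib.sha256).digest()

def verify_command(command: bytes, tag: bytes) -> bool:
    """Reject commands whose tag does not match (manipulated or unauthorized)."""
    expected = hmac.new(SHARED_KEY, command, hashlib.sha256).digest()
    # constant-time comparison avoids leaking information through timing
    return hmac.compare_digest(expected, tag)

cmd = b"MOVE_JOINT 3 +15.0deg"   # hypothetical command format
tag = sign_command(cmd)
```

A command altered in transit fails verification, because any change to the bytes produces a completely different tag.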

When you think of robots or robotics, your thoughts probably revolve mainly around the last 50 years: most people picture a more or less human-looking machine with arms, legs and a friendly smile. In fact, the fascination with humanoid machines and mechanical helpers stretches back centuries. Here are some highlights from the past that illustrate the evolution of robotics:

As early as the 1st century AD, there were inventions, machines and works that are considered the forerunners of today's robots or robotics. They come from Heron of Alexandria, a Greek mathematician and engineer. This also explains his "nickname" Mechanicus. In his work "Automata" ("Book of Machines") Heron describes various machines. Some could automatically open temple doors or play music. A first antique look at smart homes, if you will. In addition to his designs of catapult-like weapons, Mechanicus was best known for the so-called Heron's ball. This is the first heat engine, a forerunner of the steam engine. In total, the forefather of robotics designed more than 100 automatons and machines.

In 1495, the polymath Leonardo da Vinci designed what was probably the first human-like machine. The construction, christened the “Mechanical Knight”, could sit and stand; other functions included raising its visor and fully moving its arms. A complex system of ropes and pulleys took care of the movements. Whether da Vinci only designed the robot or actually built it cannot be historically proven. In any case, replicas of the robot knight recreated from the plans were fully functional.

In the science fiction play “R.U.R.” by the Czech writer Karel Čapek from 1920, the word “robot” was used for the first time. The world premiere took place on January 25, 1921. In the internationally successful piece, autonomously thinking machines look very similar to humans. After gaining consciousness, the robots rebel against their role as work slaves and wipe out humanity. A touch of Terminator on the theater stage.

At the New York World's Fair in 1939, “Elektro” made its grand entrance. The humanoid robot, over two meters tall and weighing more than 120 kilograms, could speak around 700 words thanks to an integrated record player. In addition to movable arms and legs, the mechanical man captivated audiences with its ability to distinguish differently colored lights and to smoke a cigar. In the following year, Elektro performed at the fair together with the robot dog “Sparko”.

George Devol received the patent for the first industrial robot in 1961. In the same year, the Unimate was put to work on a General Motors assembly line. It consisted of a computer-like box connected to another box and an arm. The robot removed heavy die-cast parts from an assembly line and then welded them onto car bodies. At the time, this section of production carried high health risks for human workers: in addition to the danger posed by chemical substances, there were frequent accidents in which workers lost limbs. In Germany, the age of industrial robots began in the 1970s.

The Munich start-up Franka Emika was awarded the German Future Prize 2017 by the Federal President in November 2017 for the development of cost-effective, flexible and intuitive robots. The lightweight robots can be used in both the industrial and the care sector. This is made possible by torque sensors that are built into the joints and that sense human touch. Another special feature of this robot is a price point that is well below the market average and thus makes powerful, state-of-the-art robots attractive and affordable for small and medium-sized companies.

Robot drives and controls

The most important types of drive

A fundamental distinction is made between two types of drive in robotics: electric motors and hydraulic drives. What are the main features of these two types of propulsion? What are the main differences between them?

The majority of modern robots currently use electric motors. While humanoid robots and smaller specimens are mainly operated with direct current motors (also known as DC motors), three-phase motors are mainly used in larger industrial robots or CNC machines. These motors are preferred in machine systems where the robot often makes the same movement, for example a rotating arm.

The modern hydraulic drive in a robot works like an artificial muscle. Japanese developers have been working since 2014 on an artificial muscle consisting of a rubber tube, high-tensile fibers and a protective sleeve. This system, modeled on the human muscle, no longer relies on air pressure but is moved hydraulically. The advantages of this concept: the hydraulic muscle is more powerful and at the same time capable of delicate movements. Compared to the electric motor, the system is also more robust, so robots equipped with hydraulic drives can withstand the adverse conditions in disaster areas.

The three phases of robot control

A robot is basically controlled in three phases: perception, processing and action. Most types of robots are currently controlled using preprogrammed or learning algorithms. A humanoid robot or cobot perceives its environment and other important information, such as the recognition of workpieces, via its sensors. This information is then processed and passed on as signals to the motors, which set the mechanical elements in motion. Artificial intelligence (AI) is another way for a robot to determine how to act optimally in its environment. In terms of control, systems can be divided into different degrees of autonomy within the framework of human-machine interaction:

Direct control: The human has complete control and steers the robot either directly and haptically, via remote control, or using an algorithm programmed into the control unit.

Assisted control: The human specifies basic positions or sequences of movements. The robot then determines for itself how to use its motors optimally within the framework of these specifications.

Task-level autonomy: The human gives a general task. The robot autonomously determines the optimal positions and motion sequences to perform it.

Full autonomy: The robot recognizes its tasks by itself and carries them out completely independently.
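The three phases of perception, processing and action can be sketched as a single control-loop step. The sensor and motor interfaces below are placeholders for illustration, not any real robot API.

```python
def control_step(read_sensors, decide, actuate):
    """One iteration of the perceive-process-act cycle."""
    observation = read_sensors()    # perception: gather sensor data
    command = decide(observation)   # processing: map observation to an action
    actuate(command)                # action: drive the motors

# toy example: a line follower that steers against a sensed lateral offset
log = []
control_step(
    read_sensors=lambda: {"line_offset": -0.2},   # hypothetical sensor reading
    decide=lambda obs: {"steer": -obs["line_offset"]},
    actuate=log.append,                           # "motor" just records the command
)
# log now holds the motor command derived from the observation
```

A real controller runs this loop continuously, typically hundreds of times per second, which is what makes reacting to a dynamic environment possible.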

In order to carry out production processes and move objects, robots need mechanical extremities. These are available in different versions:

The mechanical gripper, the most widespread form, is mainly used in industrial robots and is in most cases pneumatically or hydraulically driven. Smaller robots with correspondingly compact grippers have a pneumatic drive, which enables precise movements at manageable cost. The hydraulic drive is used for heavy payloads.

In the case of magnetic grippers, a basic distinction is made between permanent and electromagnetic grippers. With the simply constructed permanent magnet grippers, the gripping force is provided by a permanent magnet. Gripped material is stripped off with the help of a piston that is built into the interior of the permanent magnet gripper. The electromagnetic gripper is supplied with electrical direct current, which provides the required magnetic field. The material is picked up and released by switching the electrical energy on and off.

Adhesive grippers (also known as adhesion grippers) are used to pick up smaller objects such as cans or cardboard boxes. Adhesion refers to the binding forces at the contact surfaces of two different or identical substances caused by molecular forces; the substances can be in a solid or liquid state. The robot's gripper moves objects using the adhesive force of liquids or special adhesives.

Vacuum grippers can hold large loads. The ambient air pressure presses the object against the sealing lips of the suction cup on the gripper, since the pressure inside the cup is lower. Heavy objects such as workpieces or car windows are thus held by the negative pressure in the suction cup. These objects must have a smooth surface so that suction is possible.
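The holding force of a vacuum gripper follows directly from the pressure difference and the suction area, F = Δp·A. A quick sketch; the cup diameter and pressure difference below are illustrative values, not from any specific gripper:

```python
import math

def holding_force(pressure_diff_pa: float, cup_diameter_m: float) -> float:
    """Holding force in newtons: pressure difference times effective cup area."""
    area = math.pi * (cup_diameter_m / 2) ** 2   # circular suction area
    return pressure_diff_pa * area

# a 10 cm suction cup at 60 kPa below ambient pressure holds roughly 470 N
f = holding_force(60_000, 0.10)
```

In practice a safety factor is applied on top of this, since accelerating the load or gripping a slightly rough surface reduces the usable force.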

Compared with conventional grippers, more filigree actions are possible with humanoid hands. One example is the University of São Paulo’s Kanguera project. This robotic hand is the shape and size of a human hand. The signals are transmitted via cables and a transformer, which results in greater precision compared to previous robot hands.

With the help of the integrated sensors, robots perceive physical or chemical influences of their environment and convert them into impulses. For example, objects are identified and localized. The robot can also use sensors to detect other important environmental factors such as temperature, movement, pressure, light or humidity. Internal sensors provide information about the speed or charge status, while external sensors help with interaction and navigation. The following is an overview of the most important sensor types:

One of the most commonly used sensor types is the force/torque sensor. It is implemented in the gripper and can record both forces and torques. Strain gauges detect deformations in the micrometer range; these deformations are converted into three force and three torque components using a calibration matrix. Force/torque sensors contain a digital signal processor that records and filters the sensor data, calculates the measurement values and transmits them via the communication interface.
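The conversion from strain-gauge readings to force and torque components described above is a matrix multiplication: a 6×N calibration matrix maps the N gauge signals to three forces and three torques. A minimal sketch with a made-up diagonal calibration matrix for a hypothetical six-gauge sensor:

```python
def apply_calibration(calibration, gauge_signals):
    """Map raw strain-gauge signals to (Fx, Fy, Fz, Tx, Ty, Tz) via a 6xN matrix."""
    return [sum(c * s for c, s in zip(row, gauge_signals)) for row in calibration]

# illustrative 6x6 calibration matrix: identity scaled by 2.0
cal = [[2.0 if i == j else 0.0 for j in range(6)] for i in range(6)]
wrench = apply_calibration(cal, [1.0, 0.0, 0.5, 0.0, 0.0, 0.25])
# wrench holds three force and three torque components
```

Real calibration matrices are dense, not diagonal, because each gauge responds to several load directions at once; they are determined experimentally by applying known loads.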

Inductive sensors are also known as proximity sensors. They detect metal parts that are in their measuring range without contact. They are therefore very well suited, for example, for the wear-free detection of end positions of moving machine parts. The surface of the sensor emits an oscillating electromagnetic field. If metal objects get into the measurement area, they absorb a small amount of energy from the oscillator. If the energy transfer reaches a threshold value, the target object detection is confirmed and the sensor output changes its state.

Capacitive sensors consist of two metallic parts isolated from one another and can detect both metallic and non-metallic materials. The measurement takes place without contact by changing the capacitance of an electrical capacitor. Since the capacitance of a capacitor changes with the distance between its electrodes, this measurable variable is used for distance measurement. With the help of capacitive sensors, people in the vicinity of the robot can be reliably detected, for example.
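Since the capacitance of a parallel-plate arrangement is C = ε·A/d, the measured capacitance can be inverted to give the electrode distance. A sketch of that inversion; the sensor area and measured capacitance in the example are illustrative assumptions:

```python
EPSILON_0 = 8.854e-12  # vacuum permittivity in farads per meter

def plate_distance(capacitance_f: float, area_m2: float, eps_r: float = 1.0) -> float:
    """Electrode distance from measured capacitance: d = eps_r * eps_0 * A / C."""
    return eps_r * EPSILON_0 * area_m2 / capacitance_f

# a 1 cm^2 sensor measuring 0.1 pF corresponds to a gap of about 8.9 mm
d = plate_distance(0.1e-12, 1e-4)
```

The inverse relationship means sensitivity is highest at small gaps, which suits the close-range person detection described above.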

Magnetic sensors are used for contactless and exact position detection and recognize magnets even through stainless steel, plastic or wooden constructions. The sensors exploit the so-called GMR effect (GMR stands for “giant magnetoresistance”). This effect occurs in structures consisting of alternating magnetic and non-magnetic thin layers a few nanometers thick. It means that the electrical resistance of the structure depends on the mutual orientation of the magnetization of the magnetic layers: with opposing magnetization, the resistance is significantly higher than with parallel magnetization.

Touch sensors (also called tactile sensors) sense mechanical contact with objects and derive signals from them, which are then sent on. A gripper arm can, for example, determine the shape and position of an object with the help of touch sensors. Even if a sensor cannot yet keep up with the human sense of touch, new types of tactile sensors can mimic the mechanical properties and tactile receptors of the human fingertips. This should allow the robot to autonomously dose the intensity of the grip depending on the nature of the object being gripped, an important property especially in human-machine interaction.

In robotics, optical or visual sensors have the task of extracting information from an image or an image sequence, analyzing it and acting or reacting on this basis. The data are recorded, for example, by one or more cameras (2D or 3D) or a scanner. Optical sensors play a major role in the navigation of the robots and their orientation in the environment.

There are many different ways a robot can get from A to B. The most common are robots with wheels because they are easy to control and move in an energy-efficient manner. Often, however, alternative modes of transport are more suitable, for example in rough terrain or when moving in confined spaces. A major challenge in this area is the autonomous locomotion of robots. This means that the robot independently decides which type of locomotion is best suited for the respective situation and environment.

The most common method of getting around is on four wheels. There are also robots with one or two wheels to increase mobility and save components. Robots with six or more wheels are also used in the field.

One example of rail-guided locomotion is the feeding robot in agriculture. The feed container with its mixing and weighing devices hangs on a rail or is guided along the side. Power is supplied via rechargeable batteries, a trailing cable or a supply rail. The robot is controlled by a process computer built into the container and picks up new feed from stationary storage or mixing containers.

Robots that walk upright on two legs like a human still pose major challenges for developers, especially with regard to stability. The ZMP (Zero Moment Point) algorithm is a solution from Honda used in the ASIMO robot for moving on two legs. However, this model needs a flat surface to move on, so the robot is not suitable for excursions into the field. A more advanced method is a dynamic balancing algorithm. It is more robust than the ZMP technique, as the robot's motion is constantly monitored and the feet are placed accordingly to maintain stability; robots using this technique can even jump. Another approach is passive dynamics, in which the momentum of the swinging limbs is exploited for greater efficiency. With this technology, a robot can also walk up a hill and is said to move more than ten times as efficiently as robots with ZMP technology. The most impressive example of mobility and balance currently comes from Boston Dynamics: the latest version of the Atlas walking robot masters jumps and even a backflip.

When it comes to flying robots, the first thing that comes to mind is the drone, which has become indispensable in both the civil and military sectors. But there are other interesting concepts as well, such as the EU project ARCAS (Aerial Robotics Cooperative Assembly System): scientists from the German Aerospace Center have integrated a robotic gripper arm into an autonomously flying helicopter. This robot is used to inspect and repair pipelines; other areas of application are the maintenance of satellites and industrial plants or the construction of infrastructure on other planets. Harvard University researchers developed robotic bees in 2013 that can fly and dive underwater. In the future, these tiny robots could take over the tasks of endangered bees and pollinate plants.

Mobile robots are equipped with a combination of navigation hardware and software to perceive their environment, navigate optimally and react to dynamic events such as people or moving objects. Usually a mix of GPS, radar sensors, lidar technology and cameras ensures that the robots navigate and act safely in their surroundings.